| url (string, 14–2.42k chars) | text (string, 100–1.02M chars) | date (string, 19 chars) | metadata (string, 1.06k–1.1k chars) |
|---|---|---|---|
https://nasa-develop.github.io/dnppy/dev_pages/contrib_git.html
|
# GitHub Best Practices
A basic understanding of Git is key to expanding dnppy, as it is with many other large software packages such as numpy, scipy, gdal, and others you may be familiar with. If you'd like an introduction to git, check out Git Basics. There is a small set of best practices that should be employed where possible.
## What to include in a Git repo
There are certain files that you might want to store near your development environment, but that you do not want to upload to your repository. Git uses a special file called .gitignore that can be used to ignore specific files or directories in a repository, which allows them to exist peacefully in your local working copy without having their changes tracked and pushed to the origin on GitHub's website. The .gitignore for dnppy is already set up to ignore several files by name, as well as several files by extension, such as .pyc files.
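For instance, a minimal .gitignore along these lines (illustrative entries, not the exact contents of the dnppy file) keeps compiled Python files and other local clutter out of version control:

```
# Illustrative .gitignore entries (not the actual dnppy .gitignore)
*.pyc          # compiled Python bytecode
__pycache__/   # Python bytecode cache directories
*.log          # local log files
scratch/       # a purely local working directory
```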
Things to include in your repository:
• Any kind of raw text or code. Tracking changes in text is what git was built for!
• Helpful description files such as README.md that can be interpreted by GitHub and displayed while browsing.
• Small static assets or images that are helpful for documentation purposes that do not frequently change.
Things to omit from your git repository:
• Raster data! You never, ever want to include raster data in your online repository, as this data will then be permanently included in your git history and dramatically decrease performance in every aspect.
• Any kind of fairly constant binary data that GitHub cannot interpret as text. These types of files are best stored in the release assets, independent of repository tracking.
## Committing to the master branch
We typically direct people to simply download the most recent version of the master branch. As a general rule, the master branch should always be “deployable”, meaning it should work reliably.
• Simple bug fixes can be committed directly to the master branch.
• Changes to documentation, docstrings, or comments that improve clarity but preserve function can be committed directly to the master branch.
When adding some kind of new functionality, you should always create a new branch for development and testing. When you are satisfied with the new additions, you can then merge that branch into the master branch with a pull request. Learn more about the GitHub Flow.
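As a rough sketch of that workflow (branch and file names here are placeholders, not dnppy conventions):

```
git checkout -b my-feature        # create and switch to a development branch
# ...edit and test...
git add changed_file.py
git commit -m "Add my feature"
git push origin my-feature        # publish the branch, then open a pull request on GitHub
```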
Note
We started out as noobs, and did not institute proper git workflow with dnppy from the beginning. Nothing terrible happened, but some things were more difficult than they would have been otherwise. Just do your best, and learn as much as you can!
## Versioning
We expect that a new version should always be released immediately after, and sometimes immediately before, every DEVELOP term. This is due in part to the inclusion of the undeployed/proj_code folder and the fact that it needs to be made available to project partners very quickly.
version numbers
The numbering is pretty simple, and takes the format of
[major_revision].[two_digit_year].[minor_revision][beta tag]
The major revision is reserved for very large changes. When dnppy reaches complete arcpy independence or upgrades to Python 3, an increase in the major revision number would be justified. It is difficult for us to know what future scenarios may arise, but anything that changes the major revision number should be a pretty big deal. The two-digit year is a simple record-keeping device to associate a version of dnppy with NASA's fiscal year, which turns over each September. The minor revision is more or less at your discretion, but should always rise as changes are made. The beta tag is used to say dnppy is still in beta. Version changes with bug fixes and other things characteristic of a young software package still in beta can probably use a new digit after the beta tag: so 1.16.1b0 to 1.16.1b1, and so on. Eventually we should drop the beta tag and exit the "beta" phase.
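As a rough illustration of that scheme (a hypothetical helper, not a utility that ships with dnppy):

```python
# Hypothetical helper for the [major_revision].[two_digit_year].[minor_revision][beta tag] scheme.
def make_version(major, two_digit_year, minor, beta=None):
    version = "{}.{}.{}".format(major, two_digit_year, minor)
    if beta is not None:            # still in the beta phase
        version += "b{}".format(beta)
    return version

print(make_version(1, 16, 1, beta=0))  # -> "1.16.1b0"
print(make_version(2, 17, 0))          # -> "2.17.0", after the beta tag is dropped
```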
Example revision schedule:
• dnppy 1.16.1b0 Fall 2015, at the beginning of FY16
• dnppy 1.16.1b1 Bug fixes in 1b0
• dnppy 1.16.2b0 Miscellaneous update that required new version number
• dnppy 1.16.2b1 Fall 2015, end of the term update with new proj_code for partners
• dnppy 1.16.4b0 Spring 2016, mid term fixes of something
• dnppy 1.16.5b0 Spring 2016, end of the term update with new proj_code for partners
• ...
• dnppy 1.17.1b0 Fall 2016, beginning of FY17
|
2020-02-27 17:00:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20512640476226807, "perplexity": 2539.505532653804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146744.74/warc/CC-MAIN-20200227160355-20200227190355-00301.warc.gz"}
|
https://diamondbodysculpting.com/docs/sqtrva1.php?15fed1=what-kind-of-weapon-is-a-falchion
|
months[0] = " Discover the vast range of useful, leisure and educational websites published by the Siteseen network. There are many historical weapons that qualify for this type. The weapons were described by having a falchion-like blade that was fixed on a wooden shaft that measured around one to two feet in length; it was also said that it curved at the end just like an umbrella. A sword. Meaning of Falchion Join now. Two basic types can be identified: Ongoing research by James Elmslie has produced a typology covering both Falchion and Messer blade designs. The ancient falchions that have been discovered are incredibly thin and on average, lighter than a double-edged blade. the Middle Ages ( Medieval There are only a few actual falchions that have survived until the present, yet since there are numerous producers who actually replicate the weapon, one can simply browse through a plethora of online weapon shops to actually purchase a falchion sword for sale. The Falchion is a weapon which is called by different names and occurs in different shapes. months[4] = " Explore the interesting, and fascinating selection of unique websites created and produced by the Siteseen network. "; The falchion was said to be a peasant’s tool or weapon but this was not always the case when it comes to this piece the Conyers falchion was said to have belonged to a landed family and it was also a weapon that has appeared in various illustrations of mounted knights in combat. The type seems to be confined to the 13th and 14th centuries. There were several types of falchions during the eleventh throughout the sixteenth century and some of these resembled a knife more than a sword; aside from these, there were also other types that appeared with an irregular and pointed shape. full details of all Two basic types of falchion can be identified: Cleaver falchions. The blade designs of falchions varied widely across the continent and through the ages. The medieval Falchion had a curve one-edged blade, while the European version had a short back edge. At times, it is presumed that this type of sword had lower status and quality compared to the more expensive and long swords. [4] This blade style may have been influenced by the Turko-Mongol sabres that had reached the borders of Europe by the 13th century. It is defined as a broad, short sword having a convex edge curving sharply to the point. Falchions are found in different forms from around the 13th century up to and including the 16th century. © All rights reserved MedievalBritain.com 2020. (adsbygoogle = window.adsbygoogle || []).push({}); Meaning of Falchion warfare in addition to a heavy scimitar. Falchions are found in different forms from around the 13th century up to and including the 16th century. interesting facts and "; understanding the strategy Your Comment. The medieval Falchion had a curve one-edged blade, while the European version had a short back edge. 0 ; 0; New Answer; Post New Answer. sword - Falchion sword which This type of sword continues in use into the 16th century. The falchion is available in two types: the fully functional piece which can be utilized for cutting practice and training, or the decorative piece which can be added to a collector’s growing set of medieval pieces. sword When it comes to the falchion, there is no specific origin that has been identified for the weapon; there were some who claimed that the falchion sword first came from sharp and pointed farming tools but numerous historians have disagreed. 
Under this system, all known falchions can be described as types 1 – 5 (with subtypes a – e used for any given type) as well as 5 levels of curvature. and costs 20 gp. Performance & security by Cloudflare, Please complete the security check to access. The falchion sword was featured in a variety of forms that have appeared during the thirteenth up to the sixteenth centuries wherein a couple of the falchion blades appeared just like the seax, as … In addition to the previously mentioned falchion types, there was a group of the thirteenth and early fourteenth century blades that were commonly identified as falchions. peasants. Gothic) style. allowed knights to practise Answer of this Question "What kind of weapon is a falchion?" and information about injury on his opponent. The weapons used during information about Medieval Ask your question. The Crescent Falchion +1 in 4-1 is one of the more powerful and versatile weapons … Hello!The $$\bf{Falchion}$$ was one of the most popular weapons used in the ancient times. A falchion is a one-handed, single-edged sword of European origin, whose design is reminiscent of the Chinese dadao, and modern machete. The falchion sword that was found in the area was said to have an equally broad blade throughout the length that was also straight unlike other swords. Some historians say the sword is related to the Dark Ages’ long knife, seax or scramasax. There were some who have also assumed that the falchion sword was based on the scramasax of the Franks which was described as a single-edged knife utilized for fights; records have also stated that the single-edged weapons were seen in Scandinavia where a lot of Vikings utilized the piece. months[5] = " Uncover a wealth of facts and information on a variety of subjects produced by the Siteseen network. It was a great weapon that combined the power and weight of an axe but was similarly versatile as the sword. was a low quality sword their Falchion sword By continuing, you agree to Quizzclub's Terms of Service, Privacy Policy, Cookie use and receive daily trivia quizzes from QuizzClub via email, Copyright © 2020 quizzclub.com. The falchion sword was known to be a single-edged and one-handed weapon that was of European origin and has a design that is similar to the Chinese dadao and the contemporary machete. A weapon used in of the information and "; If you continue to use this site you understand and agree to the use of cookies and accept them. These weapons were therefore not cleaving or chopping weapons similar to the machete, but quick slashing weapons more similar to shamshir or sabres despite their wide blade.
|
2021-01-18 23:29:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25196611881256104, "perplexity": 2631.66084081774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703517159.7/warc/CC-MAIN-20210118220236-20210119010236-00797.warc.gz"}
|
http://psychology.wikia.com/wiki/Adaptive_learning
|
# Adaptive learning
Adaptive learning is an educational method which uses computers as interactive teaching devices. Computers adapt the presentation of educational material according to students' learning needs, as indicated by their responses to questions and tasks. This model originates in the radical behaviourist movement of the 1950s and the unrealized promise of B.F. Skinner's teaching machines and programmed instruction.[1] The motivation is to allow electronic education to incorporate the value of the interactivity afforded to a student by an actual human teacher or tutor. The technology encompasses aspects derived from various fields of study including computer science, education, and psychology.
Adaptive learning has been partially driven by a realization that tailored learning cannot be achieved on a large scale using traditional, non-adaptive approaches. Adaptive learning systems endeavor to transform the learner from passive receptor of information to collaborator in the educational process.[2] Adaptive learning systems' primary application is in education, but another popular application is business training. They have been designed as both desktop computer applications and web applications.
Adaptive learning has also been known as adaptive educational hypermedia, computer-based learning, adaptive instruction, intelligent tutoring systems, and computer-based pedagogical agents.
## History
Adaptive learning, or intelligent tutoring, has its origins in the artificial-intelligence movement and began gaining popularity in the 1970s. At that time, it was commonly accepted that computers would eventually achieve the human ability of adaptivity. In adaptive learning, the basic premise is that the tool or system will be able to adjust to the student/user's learning method, which results in a better and more effective learning experience for the user. Back in the 1970s the main barrier was the cost and size of the computers, rendering widespread application impractical. Another hurdle in the adoption of early intelligent systems was that the user interfaces were not conducive to the learning process.
It was not until AutoTutor, developed by the Institute for Intelligent Systems around the turn of the century, that adaptive learning systems got a voice. This was a major step for adaptive learning systems because it added another medium of communication with the end user. According to Graesser, the founder and lead on the AutoTutor project, "Spoken computational environments may foster social relationships that may enhance learning." Also, in some applications audio content is a necessity, such as in language-learning applications. Today, the number of new adaptive learning system companies is growing steadily as more classrooms become computerized and other industries find uses for the applications of adaptive learning, such as professional development.
## Technology and methodology
Adaptive learning systems have traditionally been divided into separate components or 'models'. While different model groups have been presented, most systems include some or all of the following models (occasionally with different names):[3][4][5]
• Expert model - The model with the information which is to be taught
• Student model - The model which tracks and learns about the student
• Instructional model - The model which actually conveys the information
• Instructional environment - The user interface for interacting with the system
### Expert model
The expert model stores information about the material which is being taught. This can be as simple as the solutions for the question set but it can also include lessons and tutorials and, in more sophisticated systems, even expert methodologies to illustrate approaches to the questions.
Adaptive learning systems which do not include an expert model will typically incorporate these functions in the instructional model.
### Student model
Student model algorithms have been a rich research area over the past twenty years. The simplest means of determining a student's skill level is the method employed in CAT (Computer Adaptive Testing). In CAT, the subject is presented with questions that are selected based on their level of difficulty in relation to the presumed skill level of the subject. As the test proceeds, the computer adjusts the subject's score based on their answers, continuously fine-tuning the score by selecting questions from a narrower range of difficulty.
An algorithm for a CAT-style assessment is simple to implement. A large pool of questions is amassed and rated according to difficulty, either through expert analysis, experimentation, or a combination of the two. The computer then performs what is essentially a binary search, always giving the subject a question whose difficulty is halfway between what the computer has already determined to be the subject's maximum and minimum possible skill levels. These levels are then adjusted to the difficulty of the question, reassigning the minimum if the subject answered correctly, and the maximum if the subject answered incorrectly. Obviously, a certain margin for error has to be built in to allow for scenarios where the subject's answer is not indicative of their true skill level but simply coincidental. Asking multiple questions from one level of difficulty greatly reduces the probability of a misleading answer, and allowing the range to grow beyond the assumed skill level can compensate for possible misevaluations.
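A minimal sketch of this binary-search loop (illustrative only; a real CAT adds the error margins and repeated questions per level described above, and the question_bank and ask callbacks are hypothetical stand-ins):

```python
# Illustrative CAT loop: narrow the [lo, hi] skill bounds by asking questions at the midpoint difficulty.
def run_cat(question_bank, ask, lo=0.0, hi=100.0, rounds=10):
    """question_bank(difficulty) returns a question of roughly that difficulty;
    ask(question) returns True if the subject answers it correctly."""
    for _ in range(rounds):
        difficulty = (lo + hi) / 2.0      # halfway between the current bounds
        if ask(question_bank(difficulty)):
            lo = difficulty               # correct answer raises the minimum estimate
        else:
            hi = difficulty               # incorrect answer lowers the maximum estimate
    return (lo + hi) / 2.0                # point estimate of the subject's skill level

# Demo: a simulated subject who answers correctly whenever the difficulty is below 62.
print(round(run_cat(question_bank=lambda d: d, ask=lambda q: q < 62.0), 1))  # converges toward 62
```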
Richer student model algorithms look to determine causality and provide a more extensive diagnosis of a student's weaknesses by linking 'concepts' to questions and defining strengths and weaknesses in terms of concepts rather than simple 'levels' of ability. Because multiple concepts can influence a single question, questions have to be linked to all relevant concepts. For example, a matrix can list binary values (or even scores) for the intersection of every concept and every question. Then, conditional probability values have to be calculated to reflect the likelihood that a student who is weak in a particular concept will fail to correctly answer a particular question. When a student takes a test, the probabilities of weakness in all concepts, conditional on the incorrect answers across all questions, can be calculated using Bayes' law (these adaptive learning methods are often called Bayesian algorithms).[6]
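A toy sketch of this calculation for a single concept and a single question (the probabilities are made-up illustration values, not drawn from any cited system):

```python
# Bayes' law for one concept/question pair: P(weak | wrong) = P(wrong | weak) * P(weak) / P(wrong)
p_weak = 0.30                # prior probability that the student is weak in the concept
p_wrong_given_weak = 0.80    # assumed chance of a wrong answer if the concept is weak
p_wrong_given_strong = 0.15  # assumed chance of a wrong answer otherwise (slips, guesses)

p_wrong = p_wrong_given_weak * p_weak + p_wrong_given_strong * (1 - p_weak)
p_weak_given_wrong = p_wrong_given_weak * p_weak / p_wrong
print(round(p_weak_given_wrong, 3))  # ~0.696: one wrong answer raises the weakness estimate from 0.30
```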
A further extension of identifying weaknesses in terms of concepts is to program the student model to analyze incorrect answers. This is especially applicable for multiple choice questions. Consider the following example:
Q. Simplify: $2x^2 + x^3$
a) Can't be simplified
b) $3x^5$
c) ...
d) ...
Clearly, a student who answers (b) is adding the exponents and failing to grasp the concept of like terms. In this case, the incorrect answer provides additional insight beyond the simple fact that it is incorrect.
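In code this can be as simple as tagging each distractor with the misconception it reveals (the tags below are hypothetical):

```python
# Hypothetical distractor tagging for "Simplify: 2x^2 + x^3"
distractor_diagnoses = {
    "a": None,                                          # correct answer: cannot be simplified
    "b": "combined unlike terms by adding exponents",   # chose 3x^5
}
answer = "b"
if distractor_diagnoses.get(answer):
    print("Flag concept for review:", distractor_diagnoses[answer])
```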
### Instructional model
The instructional model generally looks to incorporate the best educational tools that technology has to offer (such as multimedia presentations) with expert teacher advice for presentation methods. The level of sophistication of the instructional model depends greatly on the level of sophistication of the student model. In a CAT-style student model, the instructional model will simply rank lessons in correspondence with the ranks for the question pool. When the student's level has been satisfactorily determined, the instructional model provides the appropriate lesson. The more advanced student models which assess based on concepts need an instructional model which organizes its lessons by concept as well. The instructional model can be designed to analyze the collection of weaknesses and tailor a lesson plan accordingly.
When the incorrect answers are being evaluated by the student model, some systems look to provide feedback to the actual questions in the form of 'hints'. As the student makes mistakes, useful suggestions pop up such as "look carefully at the sign of the number". This too can fall in the domain of the instructional model, with generic concept-based hints being offered based on concept weaknesses, or the hints can be question-specific in which case the student, instructional, and expert models all overlap.
## Implementations
### Classroom implementation
Adaptive learning that is implemented in the classroom environment using information technology is often referred to as an Intelligent Tutoring System or an Adaptive Learning System. Intelligent Tutoring Systems operate on three basic principles:[7]
• Systems need to be able to dynamically adapt to the skills and abilities of a student.
• Environments utilize cognitive modeling to provide feedback to the student while assessing student abilities and adapting the curriculum based upon past student performance.
• Inductive logic programming (ILP) is a way to bring together inductive learning and logic programming in an Adaptive Learning System. Systems using ILP are able to create hypotheses from examples demonstrated to them by the programmer or educator and then use those experiences to develop new knowledge that guides the student down paths to correct answers.
• Systems must have the ability to be flexible and allow for easy addition of new content.
• Cost of developing new Adaptive Learning Systems is often prohibitive to educational institutions so re-usability is essential.
• School districts have specific curriculum that the system needs to utilize to be effective for the district. Algorithms and cognitive models should be broad enough to teach mathematics, science, and language.
• Systems need to also adapt to the skill level of the educators.
• Many educators and domain experts are not skilled in programming or simply do not have enough time to demonstrate complex examples to the system so it should adapt to the abilities of educators.
### Distance learning implementation
Adaptive Learning systems can be implemented on the Internet for use in distance learning and group collaboration applications.
The field of distance learning is now incorporating aspects of adaptive learning. Initial systems without adaptive learning were able to provide automated feedback to students who were presented with questions from a preselected question bank. That approach, however, lacks the guidance which teachers in the classroom can provide. Current trends in distance learning call for the use of adaptive learning to implement intelligent dynamic behavior in the learning environment.
During the time a student spends learning a new concept, they are tested on their abilities and databases track their progress using one of the models. The latest generation of distance learning systems takes into account the students' answers and adapts itself to the student's cognitive abilities using a concept called 'cognitive scaffolding'. Cognitive scaffolding is the ability of an automated learning system to create a cognitive path of assessment from lowest to highest based on the demonstrated cognitive abilities.[8] A current successful implementation of adaptive learning in web-based distance learning is the Maple engine of WebLearn by RMIT University.[9] WebLearn is advanced enough that it can provide assessment of questions posed to students even if those questions have no unique answer, as is common in mathematics.
Group collaboration is also a hot field in the adaptive learning research area. Group collaboration is a key field in Web 2.0 which extends the functionality of distance learning. Adaptive learning can be incorporated to facilitate collaboration within distance learning environments like forums or resource sharing services.[10] Some examples of how adaptive learning can help with collaboration include:
• Automated grouping of users with the same interests.
• Personalization of links to information sources based on the user's stated interests or the user's surfing habits.
## Companies currently using adaptive learning technology
A 2013 report commissioned by the Gates Foundation provides a list of almost forty companies currently active in adaptive learning technology.[11]
• ALEKS Corporation, an online assessment and learning company, uses adaptive questioning to quickly and accurately determine what a student knows and doesn't know in a course.
• Carnegie Learning, a publisher of math curricula, offers adaptive math software (known as the Cognitive Tutor) to high school students, along with traditional textbook offerings.
• Cengage Learning, a publishing company whose Aplia product provides adaptive learning technology for Developmental English.
• DreamBox, an adaptive learning platform with individualized paths for personalized learning.
• eSpindle Learning, a nonprofit maintaining an online vocabulary and spelling coaching program based on the adaptive learning concept.
• Grockit
• Knewton, an online learning company, currently uses adaptive learning technology for its online test-prep courses and plans to apply it to a wide range of educational markets.[12]
• KnowRe, an adaptive learning solution for mathematics that provides real-time assessments, an individualized curriculum tailored to each student, and an engaging learning experience in a game-like environment.[13]
• McGraw-Hill Education, a content, software and services-based education company that most notably uses adaptive learning technology in McGraw-Hill LearnSmart, which is available for college students, and the Power of U, a math program for middle school students.
• Pearson, an education company whose adaptive SuccessMaker software provides elementary and middle school reading and math instruction.
• PrepMe, an online learning company, currently uses adaptive learning technology for test preparation, K-12 education, and professional development.[14]
• Sherston Software, a UK education software company, offers PlanetSherston, an adaptive learning platform.
• Smart.fm, a social learning and community website, uses adaptive learning technology with the goal of increasing learning speed and retention.[15]
• Smart Sparrow, has an adaptive learning platform that offers instructional designers and teachers integrated tools to create, publish and analyse their own adaptive content.[16]
## References
1. B.F. Skinner and the Teaching Machines.
2. Paramythis and Reisinger. "Adaptive Learning Environments and e-Learning Standards", Electronic Journal of eLearning, 2004. Retrieved on 2010-03-13.
3. Charles P. Bloom, R. Bowen Loftin Facilitating the Development and Use of Interactive Learning Environments, Lawrence Erlbaum Associates (1998).
4. What is an Intelligent Tutoring System?. URL accessed on August 6, 2008.
5. A Proposed Student Model Algorithm for Student Modeling and its Evaluation. URL accessed on August 6, 2008.
6. A Bayesian Diagnostic Algorithm for Student Modeling and its Evaluation. URL accessed on August 6, 2008.
7. Adaptive Learning Systems - National Institute of Standards and Technology. URL accessed on August 17, 2008.
8. Cognitive scaffolding for a web-based adaptive learning environment. URL accessed on August 17, 2008.
9. Addressing Different Cognitive Levels for On-line Learning. URL accessed on August 17, 2008.
10. Towards web-based adaptive learning communities. URL accessed on August 17, 2008.
|
2016-08-24 10:24:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.302654892206192, "perplexity": 2088.3843520152864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982292151.8/warc/CC-MAIN-20160823195812-00079-ip-10-153-172-175.ec2.internal.warc.gz"}
|
https://chemistry.stackexchange.com/questions/82470/redundant-out-of-plane-bends-in-gaussian-not-recognised
|
# Redundant out of plane bends in gaussian not recognised
I'm having an issue with something that shouldn't be that complicated. I'm trying to generate some surfaces with Gaussian to look at amine planarisations, but the modredundant command doesn't seem to be recognising the out-of-plane bend coordinate type, spitting out the error: "Unrecognised coordinate type "O"".
I've tried different coordinate types and they all work, but it seems this one specifically is not working. Does anybody have any ideas to help me? Code below:
%chk=MR-test.chk
# opt=modredundant tda=(nstates=6) cam-b3lyp/6-31++G(d,p) geom=connectivity
Amine planarisation test
0 1
N 0.14115899 0.72065377 0.00000000
C 0.63114217 -0.66528146 0.00000000
H 1.70114039 -0.66529528 0.00195365
C 0.62897198 1.41283584 1.20159164
H 0.27231899 2.42164635 1.20159076
H 0.27070141 0.90787342 2.07426291
H 1.69897019 1.41282074 1.20354879
C 0.63336050 1.41438736 -1.19890424
H 0.27828574 0.91055325 -2.07353165
H 0.27670608 2.42319736 -1.19890424
H 1.70335872 1.41437379 -1.19694825
C 0.12009706 -1.39042562 -1.25880851
H 0.78378957 -1.19762146 -2.07565538
H 0.08029786 -2.44332208 -1.07246188
H -0.85887294 -1.03416700 -1.50293458
C 0.11550541 -1.39204895 1.25599672
H 0.72562653 -1.13043658 2.09516963
H -0.89643994 -1.10106296 1.44624332
H 0.15906269 -2.44960896 1.09924986
1 2 1.0 4 1.0 8 1.0
2 3 1.0 12 1.0 16 1.0
3
4 5 1.0 6 1.0 7 1.0
5
6
7
8 9 1.0 10 1.0 11 1.0
9
10
11
12 13 1.0 14 1.0 15 1.0
13
14
15
16 17 1.0 18 1.0 19 1.0
17
18
19
O 1 8 4 2 S 25 2.000000
I've tried generating the input both with GaussView and typing it myself. It really doesn't seem to like out-of-plane bends. Thanks in advance.
• I couldn't get this to run either. From looking at the mod redundant coordinate in Gaussview, I notice that it does not give a degree measure for the out of plane bend initially, so it doesn't seem to be measuring anything right from the start. – Tyberius Sep 11 '17 at 18:42
|
2021-06-22 23:34:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3785841763019562, "perplexity": 2543.3955760301496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488525399.79/warc/CC-MAIN-20210622220817-20210623010817-00197.warc.gz"}
|
https://www.centralbanking.com/central-banking-journal/opinion/2411012/five-problems-with-floating-rate-exchange-regimes
|
# Five problems with floating rate exchange regimes
1. Why this catechism?
The decline in the US dollar price of the euro by 25% from $1.38 to $1.05 has led to a dramatic change in the cost and profit relationships between the production of goods in the eurozone and the production of similar goods in the US. As a hypothetical example consider Deutsche Engineering, which has a plant in Stuttgart that exports to the US and a similar plant in Mobile, Alabama, which also serves the US market. When the euro traded at $1.38, the profit-to-sales ratio o
|
2018-08-16 00:47:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20460288226604462, "perplexity": 3612.4083556557243}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210387.7/warc/CC-MAIN-20180815235729-20180816015729-00339.warc.gz"}
|
http://mathoverflow.net/questions/92718/is-the-tensor-product-of-a-power-series-ring-and-a-field-noetherian/92845
|
# Is the tensor product of a power series ring and a field noetherian?
Suppose that $k$ is an algebraically closed field. Let $F/k$ be a (possibly non-finitely generated) field extension. Is
$$k[[x]] \otimes_{k} F$$
noetherian?
If not, is the natural map $k[[x]] \otimes_{k} F \to F[[x]]$ injective?
-
The natural map is injective, by a simple argument: Let $T$ be a tensor in $k\left[\left[x\right]\right]\otimes_k F$ which gets mapped to $0$ by this map. Then, we can write $T$ as $\sum\limits_{i\in I} s_i \otimes f_i$ for some finite set $I$, some $s_i\in k\left[\left[x\right]\right]$ and some linearly independent $f_i\in F$. Now, the condition that $T$ gets mapped to $0$ by the natural map rewrites as $\sum\limits_{i\in I} f_is_i=0$. Hence, every $j\in\mathbb N$ satisfies $\sum\limits_{i\in I} f_i\left(s_i\right)_j=0$, where we treat power series in $k\left[\left[x\right]\right]$ ... – darij grinberg Mar 31 '12 at 3:17
... as sequences of elements of $k$. But due to the linear independence of the $f_i$, this yields that $\left(s_i\right)_j = 0$ for all $i$ and $j$, and thus $T=0$. – darij grinberg Mar 31 '12 at 3:18
I gave a wrong answer sometime ago, using injectivity. As it seems ok, let me try again:starting with infinitely many power series $\sum_ia_n^{i}x^i$ with $a_n\in\mathbb{C}$ all algebraically independent, the ideal generated by all of these is not generated by finitely many of them, I guess, for an argument of transcendence degree. But I still lack a neat proof, so I do not post it as an answer. – Filippo Alberto Edoardo Mar 31 '12 at 6:12
Darij's argument show in fact that for any field $k$ and any $k$-vector space $V$, the natural map $k^{\mathbf{N}} \otimes V \to V^{\mathbf{N}}$ is injective (it is bijective if and only if $V$ is finite dimensional). – François Brunault Mar 31 '12 at 11:33
The answer is no: for example, $k[[x]]\otimes_k k((x))$ is not noetherian.
Indeed, if it were, so would be $k((x))\otimes_k k((x))$.
But this would contradict the following interesting general theorem of Vámos:
Given an extension of fields $K/F$ the tensor product $K\otimes_F K$ is noetherian if and only if $K$ is finitely generated as a field over $F$.
Full confession
I have only read an abstract of Vámos's article because I have no access to it. Anyway, here is the reference:
P. Vámos, On the minimal prime ideal of a tensor product of two fields, Math. Proc. Cambridge Philos. Soc. 84 (1978), no. 1, p.25-35.
-
Every proper inclusion of subfields $F\subset L_1\subset L_2\subset K$ gives rise to a non-injective surjective ring homomorphism $K\otimes_{F_1} K\to K\otimes_{F_2} K$. So, in Vamos' theorem, the "only if" directly follows, and the "if" condition is equivalent to the fact that every sub-extension of a finitely generated extension of $F$ is itself finitely generated. The latter fact is stated with no reference in Wiki's page on the 14th Hilbert problem en.wikipedia.org/wiki/Hilbert's_fourteenth_problem – YCor Mar 31 '12 at 13:16
[of course I means $L_i=F_i$]. Note that the immediate implication in my comment is that if $K$ is infinitely generated then $K\otimes_F K$ is not noetherian (this is enough for Georges's example). For the converse, let $L\subset K$ be the field generated by a transcendence basis $x_1,\dots,x_d$, so $B=L\otimes_F L$ is a localization of $F[x_1,\dots,x_d,y_1,\dots,y_d]$ so is noetherian, and $K\otimes_F K$ is a finitely generated $B$-algebra so is noetherian as well. (By my previous remark, as a corollary every subfield of a f.g. field is f.g.) – YCor Apr 1 '12 at 15:53
I was emailed the following argument:
We prove that $k[[x]]\otimes_{k} k((x))$ is not noetherian by showing directly that $k((x)) \otimes k((x))$ is not noetherian (as suggested by Georges Elencwajg). I will just handle the case $k=\bar{\mathbb{Q}}$ and then make a remark about the general case at the end.
The field $k((x))$ has only countably many finite separable extensions because every such extension is obtained by adjoining a root of $x$. On the other hand, the transcendence degree of $k((x))$ over $k$ must be uncountable because $k((x))$ is uncountable and $k$ is countable. Fix a transcendence basis $( t_i )_{i \in I}$ for $k((x))$ over $k$.
The extension $k((x))$ of $K := k((t_i))$ is algebraic with infinite separable degree. Indeed, if the separable degree were finite, then $K$ would admit at most countably many finite separable extensions, as this is true for $k((x))$. This is absurd because $( t_i )$ is uncountable.
Because the separable degree of $k((x))$ over $k((t_i))$ is infinite, $(k((x)) \otimes_{K} k((x)))_{\text{red}}$ has infinitely many idempotents and so
$$k((x)) \otimes_{K} k((x))$$ and hence $$k((x)) \otimes_{k} k((x))$$
are non-noetherian. This completes the proof.
With work, this proof can be modified to hold when $k$ is a finite field $\mathbb{F}$. In this case, one must argue more carefully to show that $k((x))$ has only countably many finite separable extensions. (The email indicated that one should use local compactness together with Krasner's Lemma.) Finally, one can deduce the case of a more general field $k$ from the case $k=\bar{\mathbb{Q}}$ or $\mathbb{F}$ by using a faithfully flat descent argument.
-
|
2014-08-23 15:20:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9713335037231445, "perplexity": 125.71191723946653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500826259.53/warc/CC-MAIN-20140820021346-00276-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://www.esaral.com/q/let-f-1-3-48375/
|
Let f(1,3)
Question:
Let $f:(1,3) \rightarrow \mathbb{R}$ be a function defined by $f(x)=\frac{x[x]}{1+x^{2}}$,
where $[x]$ denotes the greatest integer $\leq x$. Then the range of $f$ is:
1. (1) $\left(\frac{2}{5}, \frac{3}{5}\right) \cup\left(\frac{3}{4}, \frac{4}{5}\right)$
2. (2) $\left(\frac{2}{5}, \frac{1}{2}\right) \cup\left(\frac{3}{5}, \frac{4}{5}\right)$
3. (3) $\left(\frac{2}{5}, \frac{4}{5}\right)$
4. (4) $\left(\frac{3}{5}, \frac{4}{5}\right)$
Correct Option: , 2
Solution:
$f(x)=\begin{cases}\frac{x}{x^{2}+1} ; & x \in(1,2) \\ \frac{2 x}{x^{2}+1} ; & x \in[2,3)\end{cases}$
$f^{\prime}(x)=\begin{cases}\frac{1-x^{2}}{\left(1+x^{2}\right)^{2}} ; & x \in(1,2) \\ \frac{2\left(1-x^{2}\right)}{\left(1+x^{2}\right)^{2}} ; & x \in[2,3)\end{cases}$
$\therefore f(x)$ is decreasing on each interval (both derivatives are negative for $x>1$)
$\therefore \quad y \in\left(\frac{2}{5}, \frac{1}{2}\right) \cup\left(\frac{6}{10}, \frac{4}{5}\right]$
$\Rightarrow \quad y \in\left(\frac{2}{5}, \frac{1}{2}\right) \cup\left(\frac{3}{5}, \frac{4}{5}\right]$
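A quick numeric sanity check of this range (a rough sketch that samples each piece on a fine grid):

```python
# Sample f(x) = x*[x] / (1 + x^2) on (1, 3) and report the observed extremes per piece.
import math

def f(x):
    return x * math.floor(x) / (1 + x * x)

xs1 = [1 + i * 1e-4 for i in range(1, 10000)]   # x in (1, 2)
xs2 = [2 + i * 1e-4 for i in range(0, 10000)]   # x in [2, 3)
print(min(map(f, xs1)), max(map(f, xs1)))  # approaches 2/5 and 1/2 (both ends open)
print(min(map(f, xs2)), max(map(f, xs2)))  # approaches 3/5; the maximum 4/5 is attained at x = 2
```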
|
2022-12-06 16:49:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9974623322486877, "perplexity": 1589.491206214972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711111.35/warc/CC-MAIN-20221206161009-20221206191009-00778.warc.gz"}
|
https://autowarefoundation.gitlab.io/autoware.auto/AutowareAuto/voxel__grid_8hpp.html
|
Autoware.Auto
voxel_grid.hpp File Reference
This file defines the voxel grid data structure for downsampling point clouds. More...
#include <voxel_grid/config.hpp>
#include <voxel_grid/voxels.hpp>
#include <common/types.hpp>
#include <forward_list>
#include <unordered_map>
Include dependency graph for voxel_grid.hpp:
Classes
class autoware::perception::filters::voxel_grid::VoxelGrid< VoxelT >
A voxel grid data structure for downsampling point clouds. More...
Namespaces
autoware
This file defines the lanelet2_map_provider_node class.
autoware::perception
Perception related algorithms and functionality, such as those acting on 3D lidar data, camera data, radar, or ultrasonic information.
autoware::perception::filters
Classifiers and operations that act to reduce or organize data. Currently this namespace is strictly for point cloud filters, e.g. voxelgrid and ground filtering, but in the future it may include filtering for images and other functionality.
autoware::perception::filters::voxel_grid
Resources relating to the voxel grid package.
Detailed Description
This file defines the voxel grid data structure for downsampling point clouds.
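For orientation, the basic idea behind voxel-grid downsampling can be sketched generically as follows; this is an illustrative Python snippet, not the autoware::perception::filters::voxel_grid::VoxelGrid API, whose actual interface is defined in the headers listed above.

```python
# Generic voxel-grid downsampling: bucket points by voxel index, keep one centroid per occupied voxel.
from collections import defaultdict

def voxel_downsample(points, leaf_size):
    """points: iterable of (x, y, z) tuples; leaf_size: voxel edge length."""
    buckets = defaultdict(list)
    for x, y, z in points:
        key = (int(x // leaf_size), int(y // leaf_size), int(z // leaf_size))
        buckets[key].append((x, y, z))
    # Replace each occupied voxel by the centroid of the points that fell into it.
    return [tuple(sum(c) / len(pts) for c in zip(*pts)) for pts in buckets.values()]

print(voxel_downsample([(0.1, 0.1, 0.0), (0.2, 0.1, 0.0), (2.5, 0.0, 0.0)], leaf_size=1.0))
```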
|
2023-02-08 15:02:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21011115610599518, "perplexity": 9485.303920241084}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500813.58/warc/CC-MAIN-20230208123621-20230208153621-00197.warc.gz"}
|
https://web2.0calc.com/questions/if-someone-could-explain-this-to-me-it-would-be-really
|
# If someone could explain this to me it would be really helpful! Thanks!
Given that \(-3 \le 7x + 2y \le 3 \) and \(-4 \le y - x \le 4\), what is the maximum possible value of \(x + y\)?
If you could explain this it would be AMAZING! Thank you so much!
Oct 18, 2018
#1
There may be an algebraic way to do this... but I think it can be done graphically like this:
https://www.desmos.com/calculator/4fhytbda6x
It can be shown that some corner point [a corner of the feasible region of all the graphs] will maximize x + y [the objective function]
The two corner points of interest here are (-5/9, 31/9) and (5/9, -31/9)
Testing both points in the objective function, the corner point (-5/9, 31/9) will maximize x + y = -5/9 + 31/9 = 26/9
[ This is known as linear programming ]
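A quick numeric check of this corner-point answer, assuming SciPy is available (linprog minimizes, so we minimize -(x + y)):

```python
# Verify the linear-programming answer: maximize x + y subject to the two band constraints.
from scipy.optimize import linprog

# Constraints rewritten as A_ub @ [x, y] <= b_ub:
#   7x + 2y <= 3,   -(7x + 2y) <= 3,   y - x <= 4,   -(y - x) <= 4
A_ub = [[7, 2], [-7, -2], [-1, 1], [1, -1]]
b_ub = [3, 3, 4, 4]
res = linprog(c=[-1, -1], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None), (None, None)])
print(res.x, -res.fun)  # approximately [-5/9, 31/9] and 26/9
```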
Oct 18, 2018
|
2019-03-22 15:27:31
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9375675916671753, "perplexity": 1970.1782850338986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202671.79/warc/CC-MAIN-20190322135230-20190322161230-00233.warc.gz"}
|
https://lists.gnu.org/archive/html/bug-lilypond/2011-05/msg00001.html
|
bug-lilypond
## Re: TabStaff and glissando from note to chord (or the other way around)
From: Phil Holmes
Subject: Re: TabStaff and glissando from note to chord (or the other way around)
Date: Sun, 1 May 2011 11:23:24 +0100
```
Is this a bug? (the comments in the example explain everything)
Thanks
\version "2.13.61"
music = \relative c' {
% connect wrong strings in TabStaff
dis\2\glissando <e\2 a\1>
<e\2 a\1>\glissando dis\2
% glissando direction in TabStaff
e8\2\glissando dis % correct
<e\2 a\1>\glissando <dis\2 gis\1> % correct
<e\2 a\1>4\glissando <dis\2 a'\1> % line from fret 5 to fret 4 should
% be "up to down" as in previous glissandos
}
\new StaffGroup <<
\new Staff { \clef "G_8" \music }
\new TabStaff { \clef "moderntab" \music }
>>
```
There are 2 bugs against glissandi in 13.61 - 1639 and 1640. I would assume your problem is related.
--
Phil Holmes
|
2019-07-19 05:02:50
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9423859119415283, "perplexity": 9217.089859765063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525974.74/warc/CC-MAIN-20190719032721-20190719054721-00330.warc.gz"}
|
https://techwhiff.com/learn/sketch-the-root-locus-double-2-poles/189120
|
# Sketch the root locus Double -2- Poles
###### Question:
Sketch the root locus
Double -2- Poles
|
2022-09-26 16:00:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19866561889648438, "perplexity": 9207.923458057716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00598.warc.gz"}
|
https://aviation.stackexchange.com/questions/21171/would-this-work-to-reduce-turbine-engine-spool-times
|
# Would this work to reduce turbine engine spool times?
In this answer to a question about reducing engine spool-up time, Peter Kämpf said:
When more thrust/lift is needed, the pilot adjusts pitch and fuel flow to set the desired thrust/lift level. Since the engine is already running at top speed, the change is not delayed by inertial effects.
It seems to me that a landing-mode or critical-mode switch for the engines, which would cause the FADEC to spin up the engine and control power by fuel flow instead, would eliminate the lag and increase the likelihood of survival during critical flight regimes.
As an example, encountering a micro-burst during landing will leave the plane wanting power as it flies out of the burst and has a tail-wind, often at low power and low altitude. If, in that situation, the pilot applying additional throttle increased the fuel flow and gave immediate power instead of having to wait for the engines to spool, it seems that would be very advantageous.
Have I understood the concepts correctly, and, if so, what drawbacks to this system might I be missing?
• I think Peter was writing about turboprops (and helicopter turbines) - high bypass turbofans don't have variable-pitch fan blades so can't run at full RPM when producing idle thrust. But see Turbomeca Astafan – RedGrittyBrick Sep 23 '15 at 19:12
• I got that, @RedGrittyBrick, I was latching on to the fuel flow portion of that statement. I'm not sure, however, if that's possible in a turbine engine. – FreeMan Sep 23 '15 at 19:21
• For the turbine to be running at high speed while the fan was idle, I guess you'd need a variable-ratio gearbox or variable-pitch fanblades - see RR UltraFan. – RedGrittyBrick Sep 23 '15 at 19:24
• @FreeMan The point is that in a turboprop, you can set the pitch so that more torque is needed, and RPM is maintained by injecting more fuel (compare pressing the accelerator going uphill in a car). There is no way to do this on current turbofans, so suddenly injecting extra fuel will just blow up your engine. – Sanchises Sep 23 '15 at 19:53
• @sanchises Actually, to be accurate, adding extra fuel in a current turbofan will make it spin faster (which may blow it up, but only if it goes too fast)... – Lnafziger Sep 24 '15 at 9:45
Short answer: no, none of what you describe is possible with current jet engines. You need more fuel because the angle of attack for your prop is increased, not the other way round.
Let's first go into some (simplified) detail how the turboprop works. A turboprop is basically a gas turbine engine with a propeller stuck on the axle. The propeller is the bit that converts the torque and rpm of the engine shaft into a forward thrust. The amount of forwards force is determined by RPM and blade pitch. This means that, even though you're at high RPM, you can 'feather' your propeller so that it doesn't produce any forward thrust (blades flat). At the same RPM, you can just put your blades so that they produce a lot of thrust. Of course, this comes at the cost of increased drag, so the engine has to 'work harder' to maintain RPM. This, in turn, is achieved by injecting more fuel.
Now, to a turbojet. For a turbojet, the thing that produces thrust is exhaust gas going backwards real fast. This is achieved by burning fuel so that the gas gets hot and expands, so that it wants to get out of there as fast as possible. For this, a jet engine needs a lot of air. It sucks in this air with a series of compressors at the front of the engine. Then, some fuel is injected; after that there are some turbines that extract enough energy to keep the compressors spinning, and after that there's a nozzle that converts the leftover energy into backwards gas velocity. Now, we want to increase the thrust. This means that gas has to get out of the back faster. Since the nozzle is generally fixed, this means that you just have to get more air out of the back (just like a garden hose with the tap half open or fully open). This means that we'll have to suck in some more air from the front, burn some more fuel to make sure the engine keeps spinning, and more air will come out of the back.
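As a rough illustration (this relation is my addition, not part of the original answer): for an ideal engine the net thrust is approximately
$$F \approx \dot{m}\,(v_e - v_0),$$
where $\dot{m}$ is the air mass flow, $v_e$ the exhaust velocity and $v_0$ the flight speed. With a fixed nozzle, getting more $F$ essentially means pumping more $\dot{m}$ through the engine, which is why the whole spool has to accelerate first.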
Simply increasing the fuel flow doesn't work. Jet engines operate 'lean' (more air than necessary), and the extra fuel you suddenly inject will burn up with all the excess air, which might blow up your engine. If your engine doesn't blow up, you're left with too little air to burn all the fuel, and you'll get a lot of nasty things like soot, carbon monoxide and unburnt fuel coming out of your exhaust. In practice, this means you inject a little extra fuel, wait for the engine to 'spool up' so it sucks in more air, and in that way gradually increase the engine RPM and, with it, the gas velocity.
Two things remain: why use fixed nozzles? This is because physics dictates that, for engines to be efficient, the exhaust velocity needs to be as high as possible; in practice, the exhaust velocity will be as close as possible to the speed of sound in the hot flue gas. Using a variable nozzle would mean that your engine never works efficiently at half power. Furthermore, it adds complexity and weight, and thus costs more.
Another thing: I discussed turbojets, not turbofans. In theory, one could have variable blade pitch on a jet engine. However, remember that the fan is not just for thrust (even though high-bypass engines are generally more efficient), but also to make sure enough air enters the compressors; 'feathering' them would mean that your engine might not get enough air. However, your engine would need a lot of air, because it needs to operate very lean to make sure it can handle the extra fuel flow for when you increase the blade pitch. Furthermore, a variable pitch on all blades would mean a lot of complex engineering, which would make the engine a lot heavier.
Long story short: one could design jet engines with variable pitch, but the above wall of text is only the beginning of why that is an engineering nightmare. Without variable blade pitch or nozzles, the only way to change thrust is to change how much air the engine moves, and that means waiting for it to spool up or down.
• very good answer! Would you think (given a variable nozzle) that an afterburner would do the job? – rul30 Sep 26 '15 at 6:48
• @rul30 Yes, it would give an instant power boost (in jet fighters, they're used for TOGA purposes). There are some... how shall we call it, practical problems with fitting flame-spitting engines on a passenger aircraft. Let's say the passengers in the back won't like being deaf and slightly crispy due to a 10 feet long fire straight out of hell. – Sanchises Sep 27 '15 at 10:46
https://huggingface.co/tals/albert-base-mnli
# Details
Model used in Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence (Schuster et al., NAACL 21).
For more details see: https://github.com/TalSchuster/VitaminC
When using this model, please cite the paper.
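The card itself does not include a usage snippet, so here is a minimal, hedged sketch of how an MNLI-style model is typically loaded with the Hugging Face transformers library; the premise/hypothesis strings are illustrative assumptions, and the label order should be read from the model config rather than assumed.

```python
# Hypothetical usage sketch for tals/albert-base-mnli (not part of the original card).
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("tals/albert-base-mnli")
model = AutoModelForSequenceClassification.from_pretrained("tals/albert-base-mnli")

premise = "The city council approved the budget on Tuesday."   # example text
hypothesis = "The budget was approved."                        # example text

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1).squeeze().tolist()

# The index-to-label mapping depends on the model config, so read it from there.
print({model.config.id2label[i]: round(p, 3) for i, p in enumerate(probs)})
```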
# BibTeX entry and citation info
@inproceedings{schuster-etal-2021-get,
title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence",
author = "Schuster, Tal and
Barzilay, Regina",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.52",
doi = "10.18653/v1/2021.naacl-main.52",
pages = "624--643",
abstract = "Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness{---}improving accuracy by 10{\%} on adversarial fact verification and 6{\%} on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.",
}
http://danboykis.com/posts/exercise-1-35-of-sicp/
Exercise 1.35 of SICP
Exercise 1.35: Show that the golden ratio $$\phi$$ (section 1.2.2) is a fixed point of the transformation $$x\mapsto 1 + \frac{1}{x}$$, and use this fact to compute $$\phi$$ by means of the fixed-point procedure.
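The post goes straight to the numerical part; the algebraic half of the exercise (showing that $$\phi$$ is indeed a fixed point) is short, so here it is for completeness (my addition):
$$x = 1 + \frac{1}{x} \;\Longleftrightarrow\; x^2 = x + 1 \;\Longleftrightarrow\; x = \frac{1 \pm \sqrt{5}}{2},$$
and the positive root is exactly the golden ratio $$\phi = \frac{1+\sqrt{5}}{2} \approx 1.6180339887$$, so $$\phi \mapsto 1 + 1/\phi = \phi$$.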
(define tolerance 0.00001)
(define (fixed-point f first-guess)
(define (close-enough? v1 v2)
(< (abs (- v1 v2)) tolerance))
(define (try guess)
(let ((next (f guess)))
(if (close-enough? guess next)
next
(try next))))
(try first-guess))
> (fixed-point (lambda (x) (+ 1 (/ 1 x))) 1.0)
1.6180327868852458
Actual value of golden ratio: 1.61803399
Tolerance: 0.00001
$$\epsilon = \left|1.61803399-1.6180327868852458\right| = 0.0000012031147542 < 0.00001$$
Clearly $$\epsilon$$ is in the defined error tolerance.
https://www.numerade.com/questions/determine-whether-the-series-is-convergent-or-divergent-displaystyle-sum_k-1infty-ke-k2/
# Determine whether the series is convergent or divergent.$\displaystyle \sum_{k = 1}^{\infty} ke^{-k^2}$
## convergent
### Video Transcript
For this series we use the comparison test. Rewrite the series as $\sum_{k=1}^{\infty} k e^{-k^2} = \sum_{k=1}^{\infty} \frac{k}{e^{k^2}}$; writing $e^{k^2}$ in the denominator makes the next step clearer. Since $e^{k^2} \ge e^{k}$ for $k \ge 1$, each term satisfies $k e^{-k^2} \le k e^{-k}$. The series $\sum_{k=1}^{\infty} k e^{-k}$ converges by the ratio test. So, with $a_k \le b_k$ and $\sum b_k$ convergent (where $a_k$ are the terms of our original series), the comparison test shows that the original series converges.
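For completeness, the ratio-test step for the comparison series can be written out as follows (my addition, not part of the transcript):
$$\lim_{k\to\infty}\frac{(k+1)e^{-(k+1)}}{k\,e^{-k}} = \lim_{k\to\infty}\frac{k+1}{k}\cdot\frac{1}{e} = \frac{1}{e} < 1,$$
so $\sum_{k\ge 1} k e^{-k}$ converges, and since $0 \le k e^{-k^2} \le k e^{-k}$ for $k \ge 1$, the comparison test gives convergence of $\sum_{k\ge 1} k e^{-k^2}$.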
https://math.stackexchange.com/questions/781647/compute-the-max-flow-of-the-following-network-using-the-ford-fulkerson-algorith
# Compute the max flow of the following network, using the Ford-Fulkerson algorithm.
Compute the max flow for the following diagram, using the Ford-Fulkerson algorithm.
I computed the max flow, using the Ford-Fulkerson algorithm, and I got 14. However, the capacity of the min cut I found is 16, which does not verify the max-flow/min-cut theorem. Otherwise, if I did something wrong in finding the min-cut and/or the max flow, let me know. It took me many hours to work out this problem.
• Can you show us your work? Perhaps we can spot an error in your work. – ml0105 May 5 '14 at 4:19
Max flow is 14. The cut partition is $\{B, C, D, G\}$ (shown in green in the figure) versus $\{A, E, F, H\}$ (shown in purple); the cut edges are shown in red. By the Max-Flow Min-Cut Theorem, the simultaneous existence of a cut of capacity 14 and a flow of value 14 proves this is optimal.
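Since the diagram itself is not reproduced here, a small, self-contained Edmonds-Karp sketch (Ford-Fulkerson with BFS augmenting paths) in Python can be used to double-check a hand computation; the adjacency structure in the example at the bottom is hypothetical, not the graph from the question.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp (Ford-Fulkerson with BFS augmenting paths).
    capacity: dict mapping u -> {v: capacity of edge u->v}."""
    # Residual graph that also contains reverse edges with 0 initial capacity.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in list(capacity):
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)
    residual.setdefault(source, {})
    flow = 0
    while True:
        # Breadth-first search for a shortest augmenting path.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:          # no augmenting path left: flow is maximal
            return flow
        # Bottleneck capacity along the path found.
        bottleneck, v = float("inf"), sink
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        # Push the bottleneck flow and update residual capacities.
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# Hypothetical example graph (the question's diagram is not reproduced here).
example = {
    "A": {"B": 10, "C": 8},
    "B": {"D": 5, "E": 7},
    "C": {"E": 10},
    "D": {"F": 10},
    "E": {"D": 3, "F": 9},
}
print(max_flow(example, "A", "F"))
```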
https://learn.careers360.com/ncert/question-given-a-non-empty-set-x-let-star-defined-from-px-cross-px-to-px-be-defined-as-a-star-b-equals-a-minus-b-union-b-minus-a-for-all-a-b-in-px-show-that-the-empty-set-phi-is-the-identity-for-the-operation-star-and-all-the-elements-a-of/
# Given a non-empty set X, let ∗ : P(X) × P(X) → P(X) be defined as A * B = (A – B) ∪ (B – A), ∀ A, B ∈ P(X). Show that the empty set φ is the identity for the operation ∗ and all the elements A of P(X) are invertible with A –1 = A.
Q. 13 Given a non-empty set X, let $* : P(X) \times P(X) \rightarrow P(X)$ be defined as
$A * B = (A - B) \cup (B -A), \;\forall A, B \in P(X).$ Show that the empty set $\phi$ is the
identity for the operation ∗ and all the elements A of P(X) are invertible with
$A^{-1} =A$. (Hint: $(A - \phi) \cup (\phi - A) = A$ and $(A - A) \cup (A - A) = A * A = \phi$).
Let $* : P(X) \times P(X) \rightarrow P(X)$ be defined as $A * B = (A - B) \cup (B -A), \;\forall A, B \in P(X).$
Let $A \in P(X)$. Then
$A * \phi = (A - \phi ) \cup (\phi -A) = A\cup \phi =A$
$\phi * A = (\phi - A ) \cup (A -\phi ) = \phi \cup A =A$
$\therefore \, \, \, A*\phi =A= \phi * A$ for all $A \in P(X)$
Thus, $\phi$ is identity element for operation *.
An element $A \in P(X)$ will be invertible if there exists $B\in P(X)$,
such that $A*B=\phi=B*A$ (here $\phi$ is the identity element).
$A * A = (A - A ) \cup (A -A) = \phi \cup \phi =\phi$ $\forall A \in P(X)$
Hence, all elements A of P(X) are invertible with $A^{-1}=A.$
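As a quick computational sanity check of the two identities above (my addition; the universe $X=\{1,2,3\}$ is just an example), note that $A * B$ is exactly the symmetric difference of sets, which Python exposes via the ^ operator:

```python
from itertools import combinations

X = {1, 2, 3}
# All subsets of X, i.e. the power set P(X).
subsets = [set(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def star(a, b):
    return (a - b) | (b - a)   # same as a ^ b (symmetric difference)

for A in subsets:
    assert star(A, set()) == A == star(set(), A)   # the empty set acts as identity
    assert star(A, A) == set()                     # every A is its own inverse
print("Checked", len(subsets), "subsets of", X)
```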
http://undocumentedmatlab.com/blog/legend-semi-documented-feature/
Legend ‘-DynamicLegend’ semi-documented feature
In one of my projects, I had to build a GUI in which users could interactively add and remove plot lines from an axes. The problem was that the legend needed to be kept in constant sync with the currently-displayed plot lines. This can of course be done programmatically, but a much simpler solution was to use legend‘s semi-documented ‘-DynamicLegend’ feature. Here’s a simple example:
x = 0:.01:10;
plot(x, sin(x), 'DisplayName','sin');
legend('-DynamicLegend');
hold all;  % add new plot lines on top of previous ones
plot(x, cos(x), 'DisplayName','cos');
We can see how the dynamic legend automatically keeps in sync with its associated axes contents when plot lines are added/removed, even down to the zoom-box lines… The legend automatically uses the plot lines ‘DisplayName’ property where available, or a standard ‘line#’ nametag where not available:
Dynamic legend
DynamicLegend works by attaching a listener to the axes child addition/deletion callback (actually, it works on the scribe object, which is a large topic for several future posts). It is sometimes necessary to selectively disable the dynamic behavior. For example, in my GUI I needed to plot several event lines which looked alike, and so I only wanted the first line to be added to the legend. To temporarily disable the DynamicLegend listener, do the following:
% Try to disable this axes's legend plot-addition listener
legendAxListener = [];
try
    legendListeners = get(gca,'ScribeLegendListeners');
    legendAxListener = legendListeners.childadded;
    set(legendAxListener,'Enable','off');
catch
    % never mind...
end
% Update the axes - the legend will not be updated
...
% Re-enable the dynamic legend listener
set(legendAxListener,'Enable','on');
Unfortunately, this otherwise-useful DynamicLegend feature throws errors when zooming-in on bar or stairs graphs. This can be replicated by:
figure;
bar(magic(4));  % or: stairs(magic(3),magic(3));
legend('-DynamicLegend');
zoom on;
% Now zoom-in using the mouse to get the errors on the Command Window
The fix: modify %MATLABROOT%\toolbox\matlab\scribe\@scribe\@legend\init.m line #528 as follows:
% old:
str = [str(1:insertindex-1);{newstr};str(insertindex:length(str))];
% new:
if size(str,2) > size(str,1)
    str = [str(1:insertindex-1),{newstr},str(insertindex:length(str))];
else
    str = [str(1:insertindex-1);{newstr};str(insertindex:length(str))];
end
The origin of the bug is that bar and stairs generate hggroup plot-children, which saves the legend strings column-wise rather than the expected row-wise. My fix solves this, but I do not presume this solves all possible problems in all scenarios (please report if you find anything else).
Semi-documented
The DynamicLegend feature is semi-documented. This means that the feature is explained in a comment within the function (which can be seen via the edit(‘legend’) command) that is nonetheless not part of the official help or doc sections. It is an unsupported feature originally intended only for internal Matlab use (which of course doesn’t mean we can’t use it). This feature has existed for many releases (Matlab 7.1 for sure, perhaps earlier), so while it may be discontinued in some future Matlab release, it has had a very long life span… The downside is that it is not supported: I reported the bar/stairs issue back in mid-2007 and so far this has not been addressed (perhaps it never will be). Even my reported workaround in January this year went unanswered (no hard feelings…).
DynamicLegend is a good example of a useful semi-documented feature. Some other examples, which I may cover in future posts, include text(…,‘sc’), drawnow(‘discard’), several options in pan and datacursormode etc. etc.
There are also entire semi-documented functions: many of the uitools (e.g., uitree, uiundo), as well as hgfeval and others.
Have you discovered any useful semi-documented feature or function? If so, then please share your finding in the comments section below.
17 Responses to Legend ‘-DynamicLegend’ semi-documented feature
1. Thanks for this tip.
I was in a similar need, and was getting the legend cell via get, appending the new string, and setting the cell back.
leghandle = findall(gcf, 'tag', 'legend');
legstr = get(leghandle,'String');
% ensure legstr is a cell, not a string
if ischar(legstr), legstr = mat2cell(legstr); end
legstr(end+1) = {'New legend string'};
This seems much simpler.
2. Pietro says:
I was looking for an undocumented matlab feature named ‘feature’ and google luckied me here. Maybe you know something about. I tried
which feature
and I found out this ‘feature’ is an undocumented built-in function.
How I got across it?
I typed configinfo.m M-file attached to Matlab white paper on performance.
Nice blog, useful even for a beginner in Matlab as I am.
• Yair Altman says:
Thanks Pietro.
I plan to write a post about some of feature‘s features in the future. Keep a look-out for this on this blog. You can see this in my TODO list.
So much to do, so little time…
Yair
3. Pingback: uitree | Undocumented Matlab
4. Stephan says:
I was searching for a way to use a legend’s “refresh” function (that is available when right-clicking on the legend with the mouse) in a script. When I came across this post I first thought this was the way Matlab implemented the “refresh” functionality.
However, ‘DynamicLegend’ only seems to react to additions or deletions to the axes: switching the ‘Visibility’ of a line to ‘off’ or switching the legend entry off by
set(get(get(line_handle,'Annotation'),'LegendInformation'),'IconDisplayStyle','off')
is ignored by the ‘DynamicLegend’ feature. The “refresh” function correctly removes the corresponding legend entry in these cases.
Do you know how to use “refresh” from the command line?
• @Stephan – yes: You can access the Refresh uicontextmenu item (and any other context-menu item) directly from the legend axes, and then run it via the hgfeval function, as follows:
hLegend = findall(gcf,'tag','legend');
uic = get(hLegend,'UIContextMenu');
uimenu_refresh = findall(uic,'Label','Refresh');
callback = get(uimenu_refresh,'Callback');
hgfeval(callback,[],[]);
For the record, this invokes the refresh_cb function within %matlabroot%/toolbox/matlab/scribe/@scribe/@legend/methods.m. You can place a breakpoint there to see exactly what it does internally.
5. Andy says:
I have the same problem/question as Stephan. I plot all the data in the figure, but only some of it is visible; the legend, however, lists all the data, even the lines that are not visible. Only by right-clicking and choosing Refresh can I get rid of them, but I can’t do that for every single file…
Hope there will be a solution in the future.
• @Stephan and @Andy – the Legend’s DynamicLegend functionality only listens to newly-created axes children (in %matlabroot%/toolbox/matlab/scribe/@scribe/@legend/init.m line 102 [for R2012aPR]), and axes children deletions (in %matlabroot%/toolbox/matlab/scribe/@scribe/@legend/methods.m line 1281 [R2012aPR again]).
It should be relatively easy to modify any of these two places to listen to changes in the Visible property of axes children. I described the mechanism for doing this in several articles (here for example).
6. sunny says:
wow, i never knew there was a function like this and i have been thinking on how to program it. thanks, this is gonna save me lots of time
7. Teodor says:
Hi. Thank you for this post. At the moment I am a bit stuck, as trying to get the ‘ScribeLegendListeners’ throws a ‘MATLAB:class:InvalidProperty’ exception. There seems to be no such property…
My University is using Matlab-R2011a. Could it be something related to their installation?
What I am doing is using -DynamicLegend for plotting data in multiple iterations. However, I need to remove from the legend some plotted lines.
Could you recommend other solutions? Thank you.
• @Teodor – try placing a drawnow after plotting the lines and before trying to access the hidden ScribeLegendListeners property. It is also possible that you are trying to access this property on a handle of another object (maybe one of the line plots, or a figure, or another axes), rather than the handle of the axes that holds the plots for the requested legend.
8. Pingback: treeTable | Undocumented Matlab
https://dash.harvard.edu/handle/1/8191181
# Predicting Peptides Binding to MHC Class II Molecules Using Multi-objective Evolutionary Algorithms
Title: Predicting Peptides Binding to MHC Class II Molecules Using Multi-objective Evolutionary Algorithms
Author: Rajapakse, Menaka; Schmidt, Bertil; Feng, Lin; Brusic, Vladimir
Note: Order does not necessarily reflect citation order of authors.
Citation: Rajapakse, Menaka, Bertil Schmidt, Lin Feng, and Vladimir Brusic. 2007. Predicting peptides binding to MHC class II molecules using multi-objective evolutionary algorithms. BMC Bioinformatics 8: 459.
Full Text & Related Files: 2212666.pdf (387.5Kb; PDF)
Abstract: Background: Peptides binding to Major Histocompatibility Complex (MHC) class II molecules are crucial for initiation and regulation of immune responses. Predicting peptides that bind to a specific MHC molecule plays an important role in determining potential candidates for vaccines. The binding groove in class II MHC is open at both ends, allowing peptides longer than 9-mer to bind. Finding the consensus motif facilitating the binding of peptides to a MHC class II molecule is difficult because of different lengths of binding peptides and varying location of 9-mer binding core. The level of difficulty increases when the molecule is promiscuous and binds to a large number of low affinity peptides. In this paper, we propose two approaches using multi-objective evolutionary algorithms (MOEA) for predicting peptides binding to MHC class II molecules. One uses the information from both binders and non-binders for self-discovery of motifs. The other, in addition, uses information from experimentally determined motifs for guided-discovery of motifs. Results: The proposed methods are intended for finding peptides binding to MHC class II I-A$$^{g7}$$ molecule – a promiscuous binder to a large number of low affinity peptides. Cross-validation results across experiments on two motifs derived for I-A$$^{g7}$$ datasets demonstrate better generalization abilities and accuracies of the present method over earlier approaches. Further, the proposed method was validated and compared on two publicly available benchmark datasets: (1) an ensemble of qualitative HLA-DRB1*0401 peptide data obtained from five different sources, and (2) quantitative peptide data obtained for sixteen different alleles comprising of three mouse alleles and thirteen HLA alleles. The proposed method outperformed earlier methods on most datasets, indicating that it is well suited for finding peptides binding to MHC class II molecules. Conclusion: We present two MOEA-based algorithms for finding motifs, one for self-discovery and the other for guided-discovery by experimentally determined motifs, and thereby predicting binding peptides to I-A$$^{g7}$$ molecule. Our experiments show that the proposed MOEA-based algorithms are better than earlier methods in predicting binding sites not only on I-A$$^{g7}$$ but also on most alleles of class II MHC benchmark datasets. This shows that our methods could be applicable to find binding motifs in a wide range of alleles.
Published Version: doi:10.1186/1471-2105-8-459
Other Sources: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2212666/pdf/
Terms of Use: This article is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:8191181
http://crypto.stackexchange.com/questions?pagesize=15&sort=active
# All Questions
9 views
### Elgamal with secret key equal to zero possible?
From various sources (e.g. this paper, page 3), the key generation algorithm of Elgamal samples the secret key $x$ from $\mathbb{Z}_q$, which is identifiable to $\{0, 1, 2, \dots, q-1\}$. My ...
2k views
### Is HTTPS secure if someone snoops the initial handshake?
Let's say I'm on an open wireless network that's being actively sniffed and I connect to an HTTPS site. Even though my subsequent traffic is encrypted, couldn't the sniffer use the data from the ...
84 views
### To what extent is WhatsApp's statement on secure messaging realistic?
As those of you who use WhatsApp Messenger probably know, with their recent update chats and calls are now encrypted. Here's what WhatsApp claims: Messages you send to this chat and calls are now ...
827 views
### Additive ElGamal cryptosystem using a finite field
I'm trying to implement a modified version of the ElGamal cryptosystem as specified by Cramer et al. in "A secure and optimally efficient multi-authority election scheme", which possesses additive ...
49 views
### What would happen to AES if we replaced MixColumns with ShiftColumns?
I have this as a question in an assignment and I guess I don't fully understand the steps. I understand that if we omit MixColumns, then every byte of the ...
134 views
### Polynomials and efficient computability
In public key crypto, the popular definitions of security (CPA, CCA1,2) depend on PPT adversaries. I'm trying to understand why adversaries should be PPT. It's clear that adversaries should be at ...
116 views
### The difficulty of computing discrete logs
I understand that in Diffie-Hellman it should be hard to compute $a$ given $g$ and $g^a$. In computational Diffie-Hellman, it appears to be hard to compute $(g^{ab})$ from $g^a$ and $g^b$. As for ...
91 views
### What happens if no final subtraction is done in Montgomery multiplication?
I'm doing Montgomery arithmetic modulo $N = 2^{255}-19$ for the Curve25519, picking $R = 2^{256}$ for Montgomery. After multiplying two numbers $0 \leq A,B < N$ in the Montgomery representation ...
8 views
### What happens if someone implements tree-base signature scheme incorrectly?
I'm learning about signature schemes and read about Merkele's tree-base signature scheme. There are paragraphs about how the key and signings are generated. Using so-called tree structure and pseudo ...
13 views
### Unknown Hash or Cipher [on hold]
I'm trying to identify what method was used to hash or encrypt a string of text, I have the plain text and the output, however, I have been unable to identify the method used to produce the output. ...
219 views
### Zero knowledge-proof for discrete log that is not honest-verifier
Take a cyclic group of prime order. The Schnorr-protocol for proving knowledge of the discrete logarithm of some group element is honest-verifier zero-knowledge, meaning that if the verifier chooses ...
35 views
### Can we do modulus switching for number theoretic encryption?
Can we do modulus switching for number theoretic encryption such as Paillier or ElGamal?
20 views
### How to find the time complexity of modular multiplication? [duplicate]
There are two number of length m bits. How do I prove that the complexity of modular multiplication of these two numbers is $O(m^2)$.
44 views
### What is the private key in RSA?
I'm new to cryptography and I have a doubt: I read some pages a bit different definition for the RSA private key: In 1 - (n, d) In 2 - ...
108 views
### Authentication protocol which detects client impersonation from another machine
Consider following authentication protocol. Client ClientA knows some initial secret KeyA shared with server. Client also knows IP of legitimate server. Now when client wants to establish connection ...
25 views
### improve of cryptography algorithm [on hold]
This is my algorithm.. ...
32 views
### Boolean functions in cryptography
I recently started becoming interested in Boolean functions. Because they are defined as $f: \{0, 1\}^n \rightarrow \{0, 1\}$, or in other words only over $\{0, 1\}$, I guessed they can somehow be ...
21 views
### How do attacks on WEP work?
There is an abundance of tools and tutorials on how to break WEP encryption. However, I fail to find a nice resource that gives a clear break-down of why the attacks are possible. For example, I know ...
609 views
### RSA encryption using multiplication
Generally in RSA we encrypt as $m^e \pmod n$. Will RSA work if we replace the power by normal multiplication? $E = (m \times e) \mod n$ and decryption as $c \times d \mod n$. What will be $d$ ...
62 views
### Is correlation in vector distributions “dangerous”?
Consider the two vector distributions $\xi,\chi$ described below, each one outputting integer vectors of length $n$ with coefficients in $\{0,\dots,n\}$. Distribution $\xi$ samples each coefficient ...
27 views
### Symmetric cipher speed (AES vs Camellia vs Twofish)
If I have understand correct, Twofish is more secure (harder to break) than AES and Camellia, but Twofish is slower than AES and Camellia. But how can I mesure the speed difference between AES vs ...
18 views
### Currently Best Integer not bit FHE [on hold]
HElib encrypts and evaluates on bits. Are there any FHE that evaluates on encrypted integers? If there are more than one, which one to choose?
603 views
### Generation of a cyclic group of prime order
I am trying to implement a cyclic group generator in Java, but I am running into some issues. In many cryptosystems, the following phrase is expressed during the key generation stage. Let G be ...
53 views
### Understanding the Hidden Subgroup Problem specific to Integer Factorization
I've been reading about the Hidden Subgroup Problem (HSP), specifically trying to understand how it is related to the integer factorization problem. I've read What exactly is the impact of the hidden ...
184 views
### Formal verification in cryptography
I have seen in some places that people use formal verification and/or computer-aided verification for cryptography (tools like ProVerif, CryptoVerif, etc.). How do these approaches work?
29 views
### Modelling crypto protocols using Formal methods
How to start working with SPIN/Promela for modelling cryptographic protocols using formal method such as LTL?
73 views
### How to choose the reduction function in rainbow tables?
I have a question regarding Reduction Functions in Rainbow Tables. If the hashing function is MD5 or SHA-1 etc then should the reduction functions also be MD5 or SHA-1? That is, should the Reduction ...
44 views
### How secure is the OTR protocol?
I am using Adium and wondering how secure it is. I believe it uses AES-256 (I do not understand what that means) by default. It also requests that users exchange fingerprints with each other before ...
52 views
### Is the use of a 4-round Blake2b permutation in OPP justified?
The MEM-AEAD construction uses a 4-round Blake2b permutation in masked Even-Mansour mode as a (tweakable) block cipher. 4 rounds of the Blake2b permutation are already broken to my knowledge. Why ...
47 views
### Why are hash functions like SpookyHash and MurmurHash so highly collision resistant?
Why are hash functions like SpookyHash and MurmurHash so highly collision resistant? I tried to test these hash functions using a few billion input messages and I found that there is no hash value ...
64 views
### Is this cascading encryption, and is my security weakened as a result?
I'm trying to understand what is, and what is not, considered cascading encryption. All of the posts and conversations I've read over the past few days discourage the practice, including this article. ...
326 views
### mode of operation in cryptography
We need to choose a mode of operation for a Telnet like application where the average message size is between 7 and 1024 bytes. What is the best mode of operation in this case? a. CBC b. CFB c. ...
10 views
### Are there any free to use video and audio chat programs that offer end-to-end encryption that do not let the service know of the content? [migrated]
Skype is compromised, FaceTime is compromised, and I believe Ovoo is compromised. Does anyone know of any genuinely free (not free-to-try or in-app purchase-type applications) that offer end-to-end ...
77 views
### What does the TLS 1.2 client finished message contain?
I am implementing TLS 1.2 and I'm stuck on the client finished message. My question is: what is the size and structure of a clients finished message in TLS 1.2 when using the ...
2k views
### Why should I make my cipher public?
As I understand it, the less people know about the internals of my protocol or cipher, the more secure the protocol is. However Kerckhoffs's principle states that A cryptosystem should be secure ...
69 views
### Practical differences between circuits and turing machines for cryptography
In formal cryptography, we model algorithms (mostly our adversaries) as (Probabilistic) Turing Machines or as boolean circuits. In our lecture on formal cryptography, we learned that circuits are more ...
40 views
### How is decryption done in AES CTR mode?
Since AES CTR mode uses a unique IV and counter to produce the key to XOR with the plain text to get the ciphertext, the question is so as to how decryption is done. Since AES CTR produces a ...
61 views
### Extracting key bits from linear cryptanalysis equation for SDES
From the linear cryptanalysis of SDES, we get a linear equation consisting the K[1, 3] of the round key 1 and 2. From this how will I retrieve the key bit? How do we solve the linear equation we get ...
31 views
### How robust is my coded output?
I create codes and ciphers and as a hobby and I was wondering if there was any outfit that would 'test' your output to see how resilient it is. Is there a group anywhere that will accept code, try to ...
66 views
### Backdoor in NIST elliptic curves
Let $E$ be an elliptic curve defined over a finite field $F_q$ with prime order $n$ and $P,Q \in E$ and $k$ be private key such that $kP=Q$. Since $n$ is prime, $E$ is isomorphic to $Z_n$. Suppose ...
12 views
### Shamir secret sharing and homomorphism [on hold]
Problem: Implement Shamir’s (k, n) Secret Sharing (SSS) scheme, with $k = 3, n = 5$, to (a) Find shares of a given set of numbers. (b) Compute average from shares (c) Choose any three averages ...
11 views
### One time Key Encapsulation Mechanism?
For KEM's, do you only exchange a private key once via the public key encryption and then do all further encryption with this private key, or is a fresh private key sent out with each message? The ...
44 views
### How solid is this method of encryption? [on hold]
Let's say your message is "This is a message." Now translate it into numbers. a=1 b=2 c=3 so on and so forth. Now take those numbers and turn them into three digits based on a randomly generated ...
763 views
### How can mega store my login details and still be secure?
I understand how Mega's encryption works. For a quick summary of all those in the future looking for an answer on this... here is how it works: Upon first signing up for an account you make a ...
433 views
### Security of tokenization of plain text conversations - cryptanalysis
I came across a marketing video here. They claim to perform AES encryption and tokenization of sensitive data, at the corporate gateway, before it leaves the company firewall destined for the public ...
45 views
### Best known attack
I always have listened the term best known attack but I have doubts respect to this term, becase there are several kinds of attacks for example structural attacks, inversion attacks. It is possible to ...
11 views
### Can someone please decrypt “NUZGIEJX” using Vigenére [on hold]
The key is "FUN". I just want to make sure I am doing it correctly.
http://mathhelpforum.com/pre-calculus/219720-find-angle-between-vector-print.html
# Find Angle Between Vector
• June 10th 2013, 10:57 AM
Mc3
Find Angle Between Vector
<3,1> x <4,-5>
• June 10th 2013, 11:33 AM
Plato
Re: Find Angle Between Vector
Quote:
Originally Posted by Mc3
<3,1> x <4,-5>
The angle between $\vec{A}~\&~\vec{B}$ is $\arccos \left( {\frac{{\vec{A}\cdot\vec{B}}}{{\|\vec{A}\|~\|\vec{B}\|}}} \right)$
• June 10th 2013, 04:13 PM
Mc3
Re: Find Angle Between Vector
I tried cos^-1(7 / (square root of 10 x square root of 41)) but it says error on my calc?
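For reference, the computation with Plato's formula works out as follows (my addition; note the parentheses, since a missing pair is the usual cause of a calculator error here):
$$\vec{A}\cdot\vec{B} = 3\cdot 4 + 1\cdot(-5) = 7,\qquad \|\vec{A}\| = \sqrt{10},\qquad \|\vec{B}\| = \sqrt{41},$$
$$\theta = \arccos\!\left(\frac{7}{\sqrt{10}\,\sqrt{41}}\right) = \arccos\!\left(\frac{7}{\sqrt{410}}\right) \approx \arccos(0.3457) \approx 69.8^{\circ}.$$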
https://www.physicsforums.com/threads/shaft-design-problem.603455/
# Homework Help: Shaft Design Problem
1. May 5, 2012
### firebird90
I need to design a shaft that has a specific torsional load acting on it. I have a safety factor of 2 and a table of material properties (yield stress, ultimate stress, shear modulus, modulus of elasticity, etc.).
I need to find the diameter of the shaft that resists the specified torque. I have the formula $\tau$max = Tc/J and the angle of twist formula. Since the angle of twist formula depends on the length, it is useless here because no length is specified. I need to get a $\tau$max from the material properties, but I couldn't find anything to relate them up to now. I have searched many shaft design documents with no results, and I am confused right now.
Last edited: May 5, 2012
2. May 5, 2012
3. May 5, 2012
### firebird90
Now I can justify the shear stress limit with the maximum distortion energy theorem. I will get a yield shear stress from the theorem, apply the factor of safety to get my allowable $\tau$max, and use it in the torsion formula together with the yield stress to find the diameter; that should give a diameter free of plastic deformation. Am I right about that?
4. May 5, 2012
### PhanthomJay
I don't know why you need a proof for shear stress when it's max value can be looked up in a table of material properties. For steel, it's about 0.6 Fy, where Fy is the tensile yield stress. So with a SF of 2, allowable shear stress would be about 0.3Fy and you needn't worry about plastic deformation.
5. May 6, 2012
### firebird90
The proof is my biggest problem with my work, because it must be in project style, showing how I derived this 0.6Fy. I need a source to show it.
6. May 6, 2012
### PhanthomJay
The Steel Code I use calculates the ultimate shear stress as the tensile yield stress divided by square root of 3, which rounds to 0.6 Fy. I do not know if that value comes from the distortion energy theorem to which you refer.
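Pulling the pieces of the thread together, here is a sketch of the sizing calculation (my summary of the formulas already mentioned, with $T$ the applied torque, $\sigma_y$ the tensile yield stress and $n$ the safety factor; it is not an official design-code procedure):
$$\tau_{\max} = \frac{Tc}{J} = \frac{16\,T}{\pi d^3}, \qquad \tau_{\text{allow}} = \frac{\sigma_y}{\sqrt{3}\,n} \approx \frac{0.577\,\sigma_y}{n},$$
$$\frac{16\,T}{\pi d^3} \le \tau_{\text{allow}} \;\Longrightarrow\; d \ge \left(\frac{16\sqrt{3}\,n\,T}{\pi\,\sigma_y}\right)^{1/3}.$$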
https://msp.org/gt/2004/8-3/index.xhtml
Volume 8, issue 3 (2004)
Lens space surgeries and a conjecture of Goda and Teragaito, by Jacob Rasmussen (p. 1013)
Weighted $L^2$–cohomology of Coxeter groups based on barycentric subdivisions, by Boris Okun (p. 1032)
The surgery obstruction groups of the infinite dihedral group, by Francis X Connolly and James F Davis (p. 1043)
Homotopy Lie algebras, lower central series and the Koszul property, by Ştefan Papadima and Alexander I Suciu (p. 1079)
Unimodal generalized pseudo-Anosov maps, by André de Carvalho and Toby Hall (p. 1127)
A field theory for symplectic fibrations over surfaces, by François Lalonde (p. 1189)
Tetra and Didi, the cosmic spectral twins, by Peter G Doyle and Juan Pablo Rossetti (p. 1227)
Cylindrical contact homology of subcritical Stein-fillable contact manifolds, by Mei-Lin Yau (p. 1243)
The proof of Birman's conjecture on singular braid monoids, by Luis Paris (p. 1281)
On groups generated by two positive multi-twists: Teichmueller curves and Lehmer's number, by Christopher J Leininger (p. 1301)
Commensurations of the Johnson kernel, by Tara E Brendle and Dan Margalit (p. 1361)
Noncommutative localisation in algebraic $K$–theory I, by Amnon Neeman and Andrew Ranicki (p. 1385)
Limit groups and groups acting freely on $\mathbb{R}^n$–trees, by Vincent Guirardel (p. 1427)
Morita classes in the homology of automorphism groups of free groups, by James Conant and Karen Vogtmann (p. 1471)
Publication of this issue is now complete.
http://tex.stackexchange.com/questions/35157/odd-interaction-of-secnumdepth-and-section-based-counters/35163
# Odd interaction of secnumdepth and section-based counters
In the following MWE:
\documentclass{book}
\newcounter{exnum}[subsection]
\begin{document}
\chapter*{Chapter}
\section*{Section 1}
\subsection{Subsection A}
\setcounter{exnum}{1}
Exnum is \theexnum.
\subsection{Subsection B}
Exnum is \theexnum.
\end{document}
I get the output I expected; namely, the line printed in Subsection A is Exnum is 1 and in Subsection B is Exnum is 0. However, in the actual code, I didn't want subsection numbers printed, so I changed both occurrences of \subsection to \subsection*. The output then consisted of Exnum is 1 in both subsections. As a workaround, starting from the code above, I instead added the line \setcounter{secnumdepth}{1} at the top, just after the documentclass, to prevent printing the subsection numbers, but left the \subsections unstarred. Again the output was Exnum is 1 in both subsections.
What is going on here? Are these two manifestations of the same failure, or are they different problems? And, what don't I understand about what secnumdepth does? I thought it simply prevented printing of section numbers, but clearly it does much more than that.
-
Well, if the subsection counter is not incremented (and that will be the case if you use \subsection* or \setcounter{secnumdepth}{1}) then your counter won't be reset.
Using the titlesec package you can redefine your subsections to suppress the numbering from the document, but to still internally increase the counter, so your new counter will still be reset when a new subsection is created:
\documentclass{book}
\usepackage{titlesec}
\newcounter{exnum}[subsection]
\titleformat{\subsection}
{\normalfont\large\bfseries}{}{0pt}{}
\begin{document}
\chapter*{Chapter}
\section*{Section 1}
\subsection{Subsection A}
\setcounter{exnum}{1}
Exnum is \theexnum.
\subsection{Subsection B}
Exnum is \theexnum.
\end{document}
-
Interesting. The documentation for secnumdepth (and subsection*) doesn't mention that the counter is actually not incremented - it just says the counter is not printed. Anyway, this worked fine. Thanks. – rogerl Nov 23 '11 at 16:29
let the \subsection itself reset the counter:
\documentclass{book}
\let\SubSection\subsection
\def\subsection{\setcounter{exnum}{0}\SubSection}
\newcounter{exnum}
\begin{document}
\chapter*{Chapter}
\section*{Section 1}
\subsection*{Subsection A}
\setcounter{exnum}{1}
Exnum is \theexnum.
\subsection*{Subsection B}
Exnum is \theexnum.
\end{document}
-
Change the way numbers are printed:
\makeatletter
\def\@seccntformat#1{%
  \csname @seccntformat#1\expandafter\endcsname\csname the#1\endcsname\quad}
\def\@seccntformatsubsection#1#2{}
\makeatother
The usual definition is
\def\@seccntformat#1{\csname the#1\endcsname\quad}
and we simply add a new command before this; since we define only \@seccntformatsubsection, when LaTeX tries \@seccntformatsection it will treat it as \relax; when it wants to typeset a subsection, it will be presented (after the first expansion) with
\@seccntformatsubsection\thesubsection\quad
and \@seccntformatsubsection will gobble the two tokens. Setting \setcounter{secnumdepth}{2} will "number" the subsections, but \subsection{A title} will eventually print without the number.
-
|
2014-03-16 11:30:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.935185968875885, "perplexity": 4972.263546177583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678702437/warc/CC-MAIN-20140313024502-00059-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/geometry/elementary-geometry-for-college-students-5th-edition/chapter-2-section-2-5-convex-polygons-exercises-page-105/7b
|
## Elementary Geometry for College Students (5th Edition)
Published by Brooks Cole
# Chapter 2 - Section 2.5 - Convex Polygons - Exercises: 7b
#### Answer
1440$^{\circ}$
#### Work Step by Step
S = (10-2)$\times$180
S = (8)$\times$180
S = 1440$^{\circ}$
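The work above applies the interior-angle-sum formula S = (n - 2) $\times$ 180$^{\circ}$ with n = 10. As a quick check (not part of the textbook solution), here is a tiny Python snippet that evaluates the same formula:

```python
# Sum of the interior angles of a convex polygon with n sides: S = (n - 2) * 180 degrees.
def interior_angle_sum(n):
    return (n - 2) * 180

print(interior_angle_sum(10))  # -> 1440
```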
|
2018-08-18 01:42:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5926805734634399, "perplexity": 6528.376113130438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213247.0/warc/CC-MAIN-20180818001437-20180818021437-00165.warc.gz"}
|
http://llvm.org/doxygen/structllvm_1_1AnalysisSetKey.html
|
LLVM 8.0.0svn
llvm::AnalysisSetKey Struct Reference
A special type used to provide an address that identifies a set of related analyses.
#include "llvm/IR/PassManager.h"
## Detailed Description
A special type used to provide an address that identifies a set of related analyses.
These sets are primarily used below to mark sets of analyses as preserved.
For example, a transformation can indicate that it preserves the CFG of a function by preserving the appropriate AnalysisSetKey. An analysis that depends only on the CFG can then check if that AnalysisSetKey is preserved; if it is, the analysis knows that it itself is preserved.
Definition at line 80 of file PassManager.h.
The documentation for this struct was generated from the following file: llvm/IR/PassManager.h
|
2018-09-20 14:28:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3762534260749817, "perplexity": 1792.4073959582279}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156513.14/warc/CC-MAIN-20180920140359-20180920160759-00132.warc.gz"}
|
https://physics.stackexchange.com/questions/195710/how-to-derive-atomic-hamiltonian-and-cavity-hamiltonian
|
# How to Derive Atomic Hamiltonian and Cavity Hamiltonian?
In the Fundamentals of Quantum Optics and Quantum Information book which I am reading, it is stated without explanation that, for a two-level atom in a cavity system, the
• Atomic Hamiltonian is given by, $$H^{\mathcal{A}}=\hbar\omega_{g}\lvert g\rangle\langle g\rvert+\hbar\omega_{e}\lvert e\rangle\langle e\rvert$$
• Hamiltonian for cavity is, $$H^{\mathcal{F}}=\hbar\omega\hat{a}^{\dagger}\hat{a}$$
where $\omega_{g}$ and $\omega_{e}$ are the frequencies associated with the atomic levels $\lvert g\rangle$ and $\lvert e\rangle$ respectively, $\omega$ is the frequency of the cavity mode, nearly resonant with $\omega_{eg}=\omega_{e}-\omega_{g}$, and $\hat{a}^{\dagger}$ and $\hat{a}$ are the creation and annihilation operators.
Could you tell me how to derive the relations of both atomic Hamiltonian and cavity field Hamiltonian?
P.S. I apologize for the image; I can't find a way to zoom it out.
• To typeset Dirac notation use, for example, \lvert a \rangle, which produces $\lvert a \rangle$. Derivation of the atomic Hamiltonian is trivial, it is just how one writes in Dirac notation a general $2\times 2$ matrix whose eigenvectors are $\lvert e\rangle$, $\lvert g\rangle$ with eigenvalues $\hbar \omega_{e,g}$. I'll write up a derivation of the field Hamiltonian later unless someone wants to jump in first. Jul 24 '15 at 7:01
@MarkMitchison Thanks for the tips. I thought LaTeX-style code could be used here as well.
– TBBT
Jul 24 '15 at 7:05
• You can use LaTeX style code pretty much everywhere. As far as I know \ket{} is not a standard LaTeX macro and you would need to define it yourself. Jul 24 '15 at 7:06
• By relation, do you mean the Hamiltonians you've stated, or a possible interaction (which would "relate" the two systems)? Jul 24 '15 at 7:50
@Daniel No, it is not the interaction Hamiltonian $\mathcal{V}^{\mathcal{AF}}$ that I want to derive, just the atomic and cavity field Hamiltonians that I stated in my question.
– TBBT
Jul 24 '15 at 7:57
As said in the comments, the first one comes from the Dirac formalism. Simply put, it deals with quantum states as vectors $\lvert a \rangle$ whose components contain the projections of the system onto different eigenstates. For a system, the set of state eigenvectors $\lvert a_i \rangle$ must be orthonormal, $\langle a_i \vert a_j \rangle=\delta_{ij}$, which is the inner vector product and is equivalent to $\int \psi_i^* \psi_j \, dx^3$. Then there is the transition or projection operator, which results from the outer vector product $\lvert a \rangle\langle b \rvert$ and yields a matrix characterizing the transition between states. Finally, if you have an operator $\hat A$, represented by a matrix, you can find its components as $\langle a \rvert \hat A \lvert b \rangle$, where it is easy to see that, when expressed in terms of its eigenvectors, all off-diagonal values are zero and the diagonal contains the eigenvalues ($\langle a_i \rvert \hat A \lvert a_j \rangle = 0$ for $i \neq j$, and $\langle a_i \rvert \hat A \lvert a_i \rangle = a_i$).
Now the case of the atom is similar to the harmonic oscillator, and basically to that of any system dealing with confined particles: discrete levels and discrete energy steps to change from one to the other. This allows writing the Hamiltonian for this kind of system by counting total energy: if level k is occupied, then add $E_k$ to the system energy. This is what the atomic Hamiltonian represents: if you have the system in a state $\lvert \psi \rangle = c_g \lvert g \rangle + c_e \lvert e \rangle$ you will get $\langle \psi \rvert H^A\lvert \psi \rangle = (c_g^* \langle g \rvert + c_e^* \langle e \rvert) (\hbar \omega_g \lvert g \rangle \langle g \rvert+ \hbar \omega_e \lvert e \rangle \langle e \rvert)(c_g \lvert g \rangle + c_e \lvert e \rangle)$, which, as you can check, gives the mean energy of the system in this state, where $p_g = c_g c_g^*$ and $p_e = c_e c_e^*$ are the probabilities for each state, i.e. the squared projections of $\lvert \psi \rangle$ onto the eigenstates $\lvert g \rangle$ and $\lvert e \rangle$.
The second one is similar, but here there can be many photons in one state. So the total energy inside the cavity is a count of how many photons are inside, and the creation and annihilation operators are the same ones that appear for the harmonic oscillator (probably the relevant example here) and in other systems.
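To make the two Hamiltonians concrete, here is a minimal numpy sketch (my own illustration, not from the book or the answer above) that builds $H^{\mathcal{A}}$ as a $2\times 2$ matrix and $H^{\mathcal{F}}$ on a truncated Fock space, with $\hbar = 1$ and assumed example values for the frequencies and the truncation:

```python
import numpy as np

# Assumed example values (hbar = 1): atomic level frequencies, cavity frequency, photon cutoff.
w_g, w_e, w, n_max = 0.0, 1.0, 1.0, 5

g = np.array([1.0, 0.0])                              # |g> basis vector
e = np.array([0.0, 1.0])                              # |e> basis vector
H_A = w_g * np.outer(g, g) + w_e * np.outer(e, e)     # w_g|g><g| + w_e|e><e|

a = np.diag(np.sqrt(np.arange(1, n_max + 1)), k=1)    # annihilation operator, truncated
H_F = w * a.conj().T @ a                              # w * a^dagger a (number operator)

# Mean atomic energy in |psi> = c_g|g> + c_e|e>, as in the answer above:
c_g, c_e = np.sqrt(0.25), np.sqrt(0.75)
psi = c_g * g + c_e * e
print(psi.conj() @ H_A @ psi)                         # -> 0.75, i.e. |c_g|^2 w_g + |c_e|^2 w_e
```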
• This doesn't explain why only one light mode is considered (for which there is no explanation prior to interaction and rotating wave approximation, which should be explained imho) Jul 24 '15 at 20:26
Since we are considering a two-level atom we can expect only one energy for the photons. And I don't think you can deduce that result, since it is a consequence of the postulates of Quantum Mechanics. In other words, it is one of the experimental facts taken without explanation; it does not fall into the area of explained phenomena within QM. Jul 24 '15 at 23:18
|
2021-09-22 07:52:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8967411518096924, "perplexity": 262.7383141717074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057337.81/warc/CC-MAIN-20210922072047-20210922102047-00337.warc.gz"}
|
http://schresdev.com/2xo0oj7/directrix-calculator-ellipse-34f52a
|
Author: Catherine Joyce
The directrix of a conic section is a fixed straight line that, together with the fixed point called the focus, defines the curve: for every point on the conic, the ratio of its distance from the focus to its distance from the directrix is a constant, the eccentricity e. The conic is an ellipse when e < 1, a parabola when e = 1, and a hyperbola when e > 1; a circle (e = 0) has no directrix in the plane. Hyperbolas and non-circular ellipses have two foci and two associated directrices, while a parabola has one focus and one directrix.
For an ellipse x^2/a^2 + y^2/b^2 = 1 with a > b > 0 and b^2 = a^2 - c^2, the directrices are the vertical lines x = ±a/e = ±a^2/c, parallel to the minor axis and perpendicular to the major axis, where the eccentricity is e = sqrt(1 - b^2/a^2) = c/a. The latus rectum is the focal chord perpendicular to the major axis, and the latus recta lie on the lines x = ae and x = -ae. If the major axis is parallel to the y axis instead, interchange x and y in these formulas. A frequently asked derivation question is to start from the focus-directrix definition and show that the directrices of this ellipse are x = ±a^2/c.
The focus-directrix definition can also be used to write the equation directly: if S(x1, y1) is a focus, ax + by + c = 0 is the corresponding directrix and e the eccentricity, then any point P(x, y) on the ellipse satisfies SP = e*PM, i.e. (x - x1)^2 + (y - y1)^2 = e^2 ((a*x + b*y + c)/sqrt(a^2 + b^2))^2. Equivalently, an ellipse is the set of points whose distances to the two foci have a constant sum.
Worked example: for 9x^2 + 4y^2 = 36, i.e. x^2/4 + y^2/9 = 1, the major axis lies along the y axis with a = 3 and b = 2, so e = sqrt(1 - 4/9) = sqrt(5)/3, the foci are at distance ae = sqrt(5) from the centre, and the directrices are at distance a/e = 9/sqrt(5). Another example: an ellipse centred at the origin with a major axis of length 20 units and centre-to-focus distance 5 has a = 10 and c = 5, so its directrices are x = ±a^2/c = ±20.
For a parabola y = ax^2 + bx + c, the directrix is the horizontal line y = c - (b^2 + 1)/(4a); for example, with a = 5, b = 3, c = 2 this gives y = 2 - (9 + 1)/20 = 1.5, i.e. y - 1.5 = 0. For a hyperbola (x - h)^2/a^2 - (y - k)^2/b^2 = 1 with a^2 + b^2 = c^2, the directrices are the lines x = ±a^2/c. The page also lists related quantities (circumference approximation, focal parameter, latus rectum, flattening) that an ellipse calculator can compute from the same two semi-axes.
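As a small illustration of the formulas above, here is a hedged Python sketch (not taken from the calculator page) that computes the eccentricity and the directrices of an ellipse from its semi-axes, assuming the a > b > 0 orientation:

```python
import math

def ellipse_directrices(a, b):
    """Eccentricity and directrices x = +/- a/e for x^2/a^2 + y^2/b^2 = 1 with a > b > 0."""
    c = math.sqrt(a * a - b * b)      # linear eccentricity (centre-to-focus distance)
    e = c / a                         # eccentricity, 0 < e < 1
    return e, (a / e, -a / e)

# Using the semi-axes 3 and 2 from the worked example above:
e, (d1, d2) = ellipse_directrices(a=3, b=2)
print(round(e, 4), round(d1, 4))      # -> 0.7454 4.0249, i.e. e = sqrt(5)/3 and a/e = 9/sqrt(5)
```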
|
2021-08-02 12:18:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7360091209411621, "perplexity": 1439.3369225689153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154320.56/warc/CC-MAIN-20210802110046-20210802140046-00650.warc.gz"}
|
https://pballew.blogspot.com/2020/04/the-pythagonacci-connection.html
|
## Tuesday, 21 April 2020
### The Pythagonacci Connection
I've added a couple of links at the bottom to Professor Kalman's original presentation visuals and an article he co-wrote that explains much more about the idea, some of the history, and some earlier citations on it that I want to pursue. As I get through some of these, I will try to link those that I think would be helpful to HS teachers, or of general interest.
Just so you know, I'm not clever enough to come up with a compound word like Pythagonacci, but I hang out with some clever guys who are, and this one was the creation of Dan Kalman, who has shared his knowledge and indulged my questions for a long time. Looking over his web page after a recent communication, I found his notes from a presentation on the topic and, I admit it, I'm a sucker for a really cool historical blend.
As often happens with Dan's presentations, you don't have to be very clever to figure out that something VERY clever is happening in front of you. In this case it was a transformation that took the Fibonacci numbers, and transformed them into the side lengths of a Pythagorean Triangle. Yeah, that got your attention, huh.
So how does it work, and like me, you may ask yourself, "How come I never noticed that?".
So the basic idea is that you start with any four consecutive Fibonacci numbers; I'll use 3, 5, 8, 13 as my selection (that's because he did, and the arithmetic is there for me to check).
Now he calls his method OTIFAL, which I take to stand for Outsides, Twice Insides, Firsts And Lasts. So let's look at the four numbers as if they were two binomials, (3, 5)(8, 13). If we go with the outsides and multiply, we get 3*13 = 39.... write that down, that's one side of our Pythagorean, or Pythagonacci, triangle.
Now we do the insides both ways, 5*8 + 8*5 = 80; that's another leg.
Finally we find the hypotenuse as the sum of the products of the firsts and the lasts, 3*8 + 5*13 = 89.
And sure enough, we have a Pythagonacci right triangle (39, 80, 89).
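Here is a minimal Python sketch of the OTIFAL rule just described (my own illustration, not Professor Kalman's code), checking that the triple it produces really is a right triangle:

```python
def otifal(p, q, r, s):
    """Pythagorean triple from four consecutive Fibonacci numbers (p, q, r, s)."""
    leg1 = p * s            # Outsides
    leg2 = 2 * q * r        # Twice the Insides
    hyp = p * r + q * s     # product of the Firsts plus product of the Lasts
    return leg1, leg2, hyp

a, b, c = otifal(3, 5, 8, 13)
print(a, b, c, a * a + b * b == c * c)   # -> 39 80 89 True
```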
A couple of quick notes from my first reading of Professor Kalman's work with Professor Mena: he points out that all those sweet mysteries of the Fibonacci and Lucas sequences are not simply some joint Wunderkind, but part of a general pattern of sequences. Here, in their own words: "These are sequences $A_n$ defined by a recursive rule $A_{n+2} = a A_{n+1} + b A_n$ where a and b are fixed constants. We refer to such a sequence as a two-term recurrence". He then points out ten characteristic aspects of the Fibonacci-Lucas sequences that are preserved by these two-term recurrences. These include the fact that:
The sum of the squares of the first $n$ Fibonacci numbers is given by the product $F_n F_{n+1}$.
So, for example, $1^2 + 1^2 + 2^2 + 3^2 + 5^2 = 5 \times 8$
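A quick numeric check of that identity (an illustrative snippet, not from the paper):

```python
fib = [1, 1]
while len(fib) < 12:
    fib.append(fib[-1] + fib[-2])

n = 5
print(sum(f * f for f in fib[:n]), fib[n - 1] * fib[n])   # -> 40 40, i.e. F_5 * F_6 = 5 * 8
```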
Other preserved facts include that the sequence contains its own running sums, that the Pythagorean Theorem property above carries over (he actually gives a shorter one), and that the GCD of two numbers in the sequence is the sequence value whose index is the GCD of the indices of the two sequence values.
Here are just some of the patterns that emerge, including: many you may recognize, but (like me) never thought of them as Fibonacci-like.
If you start with a rule like $S_n = 3 S_{n-1} - 2 S_{n-2}$ on a sequence beginning 0, 1, you get 3, 7, 15... and quickly realize you are creating the Mersenne sequence $2^n-1$. If you start with the Lucas-style beginning 2, 3, the same rule continues 5, 9, 17... and you are getting the Fermat sequence, $2^n+1$. They give another rule that produces the Pell-Lucas numbers, and one that gives only the Fibonacci and Lucas numbers with even index.
And the Pythagonacci connection: they show that you don't need four numbers, since you can start with two and create the outside two. So in our sequence above with 3, 5, 8, 13, the three is their difference, and the 13 is their sum. We could have just used $8^2 - 5^2$, kept 2*5*8 the same, and used $5^2 + 8^2$ to produce the 39, 80, 89 from before. From any two consecutive Fibonacci numbers, we may fall back to the method often credited to Euclid, but certainly known well before 300 BC: $x^2 - y^2$, $2xy$, $x^2 + y^2$ is the whole ticket. Still, if I were teaching this to young people, I would probably give them the four-number approach, and wait.... maybe one of them is going to look, converse with another, and a hand goes up.... "But Mr. Ballew, couldn't you ..." And I will never deny them that.
I couldn't get access to the oldest article cited in the Professors' paper, but I found another on his list, by William Boulger, that gives lots of information about the earlier paper and credits the earliest observation of this Pythagonacci relationship to Charles W. Raine, in a paper in Scripta Mathematica in 1948: Charles W. Raine, Pythagorean triangles from the Fibonacci series, Scripta Mathematica 14 (1948), 164. I could not find an active link to this.
I was also shocked to read that Boulger noted that the fact appeared in the 1986 Penguin Dictionary of Curious and Interesting Numbers, by David Wells. Shocked because it was within four feet of my desk, and I had read through it dozens of times and NEVER SAW THAT!!!! I just checked, it is there, and it credits Raine with the observation as well. It also points out that the area of the right triangle formed is the product of the four Fibonacci numbers; in the case of 3, 5, 8, 13 the area is 1560 sq units. It also points out, as did Boulger, that the hypotenuse has an index that is half the sum of the indices of the four original numbers, or just the sum of the middle two. In my example the numbers are the 4th, 5th, 6th, and 7th Fibonacci numbers, and 89 is the 11th (5+6, or 1/2(4+5+6+7)).
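The same numbers make it easy to check these observations in a couple of lines (again just an illustrative snippet): Euclid's $x^2 - y^2$, $2xy$, $x^2 + y^2$ with x = 8, y = 5 reproduces the triple, and the area comes out to the product of the four Fibonacci numbers.

```python
x, y = 8, 5                                          # the two middle Fibonacci numbers
triple = (x * x - y * y, 2 * x * y, x * x + y * y)
print(triple)                                        # -> (39, 80, 89)
print(triple[0] * triple[1] // 2, 3 * 5 * 8 * 13)    # -> 1560 1560  (area = product of the four)
```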
Boulger's article is available on JSTOR: William Boulger, Pythagoras Meets Fibonacci, Mathematics Teacher 82 (1989), 277–282; or perhaps at your local college library.
The papers below are incredible, and every MS or HS teacher will surely find morsels for generating additional classroom interest. I'm going to work my way through as many of the references in there as I can, to see if I can find the first observation of the Pythagonacci connection. That ought to be more common knowledge. Thanks again to Professor Kalman for sharing this.
Professor Kalman pointed out where I had overlooked the article which provided more context on the problem, so here are the links. The Joint paper (about 17 pages) with Robert Mena , and the original slides I found to write the first part of this post.
|
2021-06-18 03:20:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5890247225761414, "perplexity": 762.8543207910062}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487634616.65/warc/CC-MAIN-20210618013013-20210618043013-00103.warc.gz"}
|
https://gateoverflow.in/1473/gate1999-1-20
|
+11 votes
Booth's coding in $8$ bits for the decimal number $-57$ is:
1. $0-100+1000$
2. $0-100+100-1$
3. $0-1+100-10+1$
4. $00-10+100-1$
## 4 Answers
+24 votes
Best answer
Convert $57$ to binary and take its $2$'s complement: $11000111$. Append one extra $0$ to the right of it:
$110001110$
To get the Booth code, scan from the right: for each bit, subtract it from the bit on its right (the appended $0$ for the LSB). So the pairs $00$ and $11$ give $0$, $01$ gives $+1$, and $10$ gives $-1$.
Applying this to $110001110$ gives $0-100+100-1$, so the answer is (B).
There is another way to solve this question.
$0-100+100-1 \to$ If you check the binary weighted sum of this code you will get $-57$. This is a trick for a quick check: the Booth code is always equivalent to its original value when read as a weighted code. If you check it before doing the above procedure, and only one of the options matches, you don't need to do the procedure at all; just mark the answer.
Here, $(-1) \times 64 + (+1) \times 8 + (-1) \times 1 = -57$.
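A small Python sketch of this recoding (an illustration, not part of the original answer) that also performs the weighted-sum check:

```python
def booth_recode(value, bits=8):
    """Booth (radix-2) recoded digits of a two's complement value, MSB first."""
    b = value & ((1 << bits) - 1)     # two's complement bit pattern
    digits, prev = [], 0              # prev is the appended a_{-1} = 0
    for i in range(bits):             # scan from LSB to MSB
        cur = (b >> i) & 1
        digits.append(prev - cur)     # recoded digit_i = a_{i-1} - a_i
        prev = cur
    return digits[::-1]

digits = booth_recode(-57)
weighted = sum(d * 2**i for i, d in enumerate(reversed(digits)))
print(digits, weighted)               # -> [0, -1, 0, 0, 1, 0, 0, -1] -57
```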
answered by Boss (42.9k points)
Someone please explain the first method?
Please explain why we have to add a zero at the end of the 2's complement.
See the Hamacher, Vranesic, and Zaky Computer Organization book, Chapter 6.
To restate the procedure: scan from right to left after appending a zero at the LSB, and subtract each bit from the bit to its right, all the way to the MSB.
Further, a transition from 0 to 1 indicates that a block of 1s has started, a transition from 1 to 0 indicates that the block of 1s has ended, and a transition from 1 to 1 means we are still inside the block.
this may help
@meghna that really helped
+7 votes
B. $-57$ is represented as 1000111; moving (right to left) from 0 to 1 we get $-1$ and from 1 to 0 we get $+1$,
so the answer is (B).
answered by Boss (31.7k points)
'In 8 bits' it's 11000111, as given by Night's King below.
+4 votes
a) If ith bit is '1' and (i-1)th bit is '0' , we substitute ith bit with '-1' .
b) If ith bit is '0' and (i-1)th bit is '1' , we substitute ith bit with '+1' .
c) If ith bit is '0' and (i-1)th bit is '0' or ith bit is '1' and (i-1)th bit is '1' then , we substitute ith bit with '0'.
d) If LSB bit a0 is '1' , we assume that a-1 is there and = '0' and hence substitute it with '-1' .
57 = 00111001
In 2's complement form, it is: 11000111 = -57
According to above rules: Booth Encoding Will Be: 0-100+100-1
Hence, Option B
Credit For The Quoted Explanation: @Habibkhan
answered by Active (2.3k points)
+3 votes
The 2's complement of 57 is 11000111, which is -57 in 8 bits.
So Q = 11000111 and q_{-1} = 0.
We know that for the bit pairs (Q_i, Q_{i-1}): 00 and 11 mean 0 (just ASR), 01 means +1 (add, then ASR), and 10 means -1 (subtract, then ASR).
Step 1: read the bit pairs of 11000111 0 from right to left: (1,0)(1,1)(1,1)(0,1)(0,0)(0,0)(1,0)(1,1).
Step 2: apply the rules pair by pair, LSB first: -1 0 0 +1 0 0 -1 0.
Step 3: reverse back to the usual MSB-to-LSB order: 0 -1 0 0 +1 0 0 -1, which is the Booth coding of the 8-bit decimal number -57.
answered by (43 points)
I don't understand :)
Can you please explain more on steps 1, 2, 3?
|
2018-10-15 22:25:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6234615445137024, "perplexity": 6317.075191126407}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509845.17/warc/CC-MAIN-20181015205152-20181015230652-00157.warc.gz"}
|
https://www.hepdata.net/record/80995
|
A search for new phenomena in pp collisions at $\sqrt{s} = 13\,\text {TeV}$ in final states with missing transverse momentum and at least one jet using the $\alpha _{\mathrm {T}}$ variable
Eur.Phys.J. C77 (2017) 294
The CMS collaboration
Abstract (data abstract)
CERN-LHC. A search for new phenomena in pp collisions at $\sqrt{s} = 13$ TeV in final states with missing transverse momentum and at least one jet using the $\alpha_{\mathrm{T}}$ variable. The final states are multiple jets and missing transverse momentum (MET).
|
2020-01-20 09:56:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9559082984924316, "perplexity": 948.4293930981083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250598217.23/warc/CC-MAIN-20200120081337-20200120105337-00137.warc.gz"}
|
http://openmoodle.conted.ox.ac.uk/mod/forum/discuss.php?d=2305&parent=12823
|
## Therapeutic cloning forum
### Euthanasia
Re: Euthanasia
It is because every case has its own unique character that we might want to distinguish between rules about euthanasia and acts of euthanasia - deciding perhaps that although individual acts may be acceptable, the law that bars them is also a good one.
Or some variation on this theme. The fact is that the type-token distinction here is crucial (see some of the other postings).
M
|
2017-09-24 04:46:57
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.878778874874115, "perplexity": 2713.8749122501245}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689874.50/warc/CC-MAIN-20170924044206-20170924064206-00356.warc.gz"}
|
https://groupprops.subwiki.org/wiki/Number_of_conjugacy_classes_equals_number_of_irreducible_representations
|
Number of irreducible representations equals number of conjugacy classes
This article gives a proof/explanation of the equivalence of multiple definitions for the term number of conjugacy classes
Statement
Consider a finite group $G$ and a splitting field $K$ for $G$. Then, the following two numbers are equal:
1. The number of conjugacy classes in $G$.
2. The number of irreducible linear representations (up to equivalence) of $G$ over $K$.
Note that any algebraically closed field whose characteristic does not divide the order of $G$ is a splitting field, so in particular, we can always take $K = \mathbb{C}$ or $K = \overline{\mathbb{Q}}$.
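For example, the symmetric group $S_3$ has exactly three conjugacy classes (the identity, the three transpositions, the two 3-cycles) and exactly three irreducible complex representations (the trivial one, the sign one, and a two-dimensional one), consistent with
$$1^2 + 1^2 + 2^2 = 6 = |S_3|.$$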
Related facts
For more facts about the degrees of irreducible representations, see degrees of irreducible representations.
Similar facts over non-splitting fields
The key starting fact is this:
Application of Brauer's permutation lemma to Galois automorphism on conjugacy classes and irreducible representations (follows in turn from Brauer's permutation lemma): Suppose $G$ is a finite group and $r$ is an integer relatively prime to the order of $G$. Suppose $K$ is a field and $L$ is a splitting field of $G$ of the form $K(\zeta)$ where $\zeta$ is a primitive $d^{th}$ root of unity, with $d$ also relatively prime to $r$ (in fact, we can arrange $d$ to divide the order of $G$ because sufficiently large implies splitting). Suppose there is a Galois automorphism of $L/K$ that sends $\zeta$ to $\zeta^r$. Consider the following two permutations:
• The permutation on the set of conjugacy classes of $G$, denoted $C(G)$, induced by the mapping $g \mapsto g^r$.
• The permutation on the set of irreducible representations of $G$ over $L$, denoted $I(G)$, induced by the Galois automorphism of $L$ that sends $\zeta$ to $\zeta^r$.
Then, these two permutations have the same cycle type. In particular, they have the same number of cycles, and the same number of fixed points, as each other.
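For example, take $G = \mathbb{Z}/3\mathbb{Z}$ and $r = 2$: the map $g \mapsto g^2$ fixes the identity class and swaps the two non-identity classes, while the Galois automorphism $\zeta \mapsto \zeta^2$ (complex conjugation on cube roots of unity) fixes the trivial character and swaps the two non-trivial characters, so both permutations have cycle type $1 + 2$.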
Using this fact, we can deduce various corollaries:
| Field | Applicable groups | Corresponding notion to irreducible representation | Corresponding notion to conjugacy class | Statement | Roughly how it is proved |
| --- | --- | --- | --- | --- | --- |
| $\mathbb{R}$ -- field of real numbers | any finite group | irreducible representation over real numbers (need not be absolutely irreducible) | equivalence class under real conjugacy, i.e., union of a conjugacy class and the conjugacy class of its inverse elements | Number of irreducible representations over reals equals number of equivalence classes under real conjugacy | use above application of Brauer's permutation lemma and look at the number of cycles for the permutations on $C(G)$ and $I(G)$ |
| $\mathbb{R}$ -- field of real numbers | any finite group | irreducible representation over complex numbers taking real character values | conjugacy class of real elements, i.e., a conjugacy class of elements that are conjugate to their inverses | Number of irreducible representations over complex numbers with real character values equals number of conjugacy classes of real elements | use above application of Brauer's permutation lemma and look at the number of fixed points for the permutations on $C(G)$ and $I(G)$ |
| $\mathbb{Q}$ -- field of rational numbers | any finite group | irreducible representation over rational numbers | equivalence class under rational conjugacy, i.e., relation where two elements that generate conjugate cyclic subgroups are considered equivalent | Number of irreducible representations over rationals equals number of equivalence classes under rational conjugacy | combine Brauer's permutation lemma with the orbit-counting theorem (Burnside's lemma), in the following form: the character of a permutation representation determines the number of orbits |
| $\mathbb{Q}$ -- field of rational numbers | any finite group whose splitting field is a cyclic extension of the rationals; this includes any odd-order p-group | irreducible representation over complex numbers with rational character values | conjugacy class of rational elements | Number of irreducible representations over complex numbers with rational character values equals number of conjugacy classes of rational elements (for any finite group whose cyclotomic splitting field is a cyclic extension of the rationals) | application of Brauer's permutation lemma |
Similar facts under action of automorphism group
The key facts are:
Particular cases
Families
| Family | Order of group | Degrees of irreducible representations (indexing set) | Conjugacy class sizes (indexing set) | Number of conjugacy classes = number of irreducible representations | More information on linear representations | More information on conjugacy classes |
| --- | --- | --- | --- | --- | --- | --- |
| finite abelian group of order $n$ | $n$ | 1 ($n$ times) | 1 ($n$ times) | $n$ | | |
| dihedral group of even degree $n$ | $2n$ | 1 (4 times), 2 ($(n - 2)/2$ times) | 1 (2 times), 2 ($(n - 2)/2$ times), $n/2$ (2 times) | $(n + 6)/2$ | linear representation theory of dihedral groups | element structure of dihedral groups |
| dihedral group of odd degree $n$ | $2n$ | 1 (2 times), 2 ($(n - 1)/2$ times) | 1 (1 time), 2 ($(n - 1)/2$ times), $n$ (1 time) | $(n + 3)/2$ | linear representation theory of dihedral groups | element structure of dihedral groups |
| symmetric group of degree $n$ | $n!$ | indexed by partitions (see linear representation theory of symmetric groups), described in terms of the Young diagram of a partition | indexed by partitions, via cycle type (see cycle type determines conjugacy class) | $p(n)$, the number of unordered integer partitions of $n$ | linear representation theory of symmetric groups | element structure of symmetric groups |
| general linear group of degree two over field of size $q$ | $q(q+1)(q-1)^2$ | 1 ($q - 1$ times), $q$ ($q - 1$ times), $q + 1$ ($(q-1)(q-2)/2$ times), $q - 1$ ($q(q-1)/2$ times) | 1 ($q - 1$ times), $q(q - 1)$ ($q(q-1)/2$ times), $q(q+1)$ ($(q - 1)(q - 2)/2$ times), $q^2 - 1$ ($q - 1$ times) | $q^2 - 1$ | linear representation theory of general linear group of degree two over a finite field | element structure of general linear group of degree two over a finite field |
| special linear group of degree two over field of size $q$, $q$ odd | $q(q+1)(q-1)$ | ? | 1 (2 times), $(q^2 - 1)/2$ (4 times), $q(q-1)$ ($(q - 1)/2$ times), $q(q + 1)$ ($(q - 3)/2$ times) | $q + 4$ | linear representation theory of special linear group of degree two over a finite field | element structure of special linear group of degree two over a finite field |
| special linear group of degree two over field of size $q$, $q$ a power of 2 | $q(q+1)(q-1)$ | ? | ? | $q + 1$ | linear representation theory of special linear group of degree two over a finite field | element structure of special linear group of degree two over a finite field |
| projective general linear group of degree two over field of size $q$, $q$ odd | $q(q+1)(q-1)$ | ? | ? | $q + 2$ | linear representation theory of projective general linear group of degree two over a finite field | element structure of projective general linear group of degree two over a finite field |
| projective special linear group of degree two over field of size $q$, $q$ odd | $q(q+1)(q-1)/2$ | ? | ? | $(q + 5)/2$ | linear representation theory of projective special linear group of degree two over a finite field | element structure of projective special linear group of degree two over a finite field |
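As a quick sanity check of the dihedral row, take even degree $n = 4$ (the dihedral group of order 8): the degrees are 1 (four times) and 2 (once), so $4 \cdot 1^2 + 1 \cdot 2^2 = 8$; the class sizes are 1, 1, 2, 2, 2, which also sum to 8; and the number of conjugacy classes is $(4 + 6)/2 = 5$, matching the five irreducible representations.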
Facts used
1. Splitting implies characters span class functions
https://blog.spp2026.de/category/conferences/
Database of online seminar talks
Due to the current situation there are numerous online seminar talks organized all over the world. I was quite happy to discover this week that the website researchseminars.org maintains a useful database of online seminars and conferences, focusing on mathematics and related fields. The website is supported by the American Mathematical Society, the MIT and … Continue reading "Database of online seminar talks"
Triangulating real projective n-space
How many vertices do you need to triangulate the real projective n-space? From this blog post of Gil Kalai I learned about a recent preprint (arXiv:2009.02703) by Adiprasito-Avvakumov-Karasev where they construct triangulations with $\exp\big((1/2 + o(1))\sqrt{n}\log{n}\big)$-many vertices, which is the first construction needing subexponentially-many vertices. More information, also about the history of this problem, may … Continue reading "Triangulating real projective n-space"
Blockseminar on Dirac operators and scalar curvature
In mid-October we gathered for one week in Bollmannsruh (somewhat west of Berlin) to work our way through the seminal paper Positive scalar curvature and the Dirac operator on complete Riemannian manifolds by Gromov and Lawson. The hotel we stayed in lies directly at the beautiful lake Beetzsee. It was the perfect place to do … Continue reading "Blockseminar on Dirac operators and scalar curvature"
Seminar on soap bubbles and positive scalar curvature
Together with Rudolf Zeidler I am organizing a reading seminar this winter on generalized soap bubbles and positive scalar curvature. The goal of it is to read the corresponding preprint by Chodosh-Li (arXiv:2008.11888). The two main results we want to understand are the following: No closed aspherical manifold of dimension 4 or 5 admits a … Continue reading "Seminar on soap bubbles and positive scalar curvature"
Prizes at the 2020 DMV Annual Meeting
The 2020 annual meeting of the DMV took place two weeks ago. You can read a short report about it here: link. Let me briefly report on the two prizes awarded at this meeting: on the occasion of its 130th anniversary, the DMV created the Minkowski Medal for outstanding mathematical research. With the Minkowski Medal, the DMV wants to honour mathematicians … Continue reading "Preise auf der DMV-Jahrestagung 2020"
Impressions from the SPP Conference
Some pictures from the SPP Conference in April 2019 in Münster. Let me start with a picture of the speaker of the SPP, Bernhard Hanke, explaining at the beginning of the conference the next steps leading to the second funding period of the SPP. (I unfortunately forgot to take a picture of Carsten Balleier from … Continue reading "Impressions from the SPP Conference"
AMS Special Session on the Mathematics of John Roe
There was a Special Session on Coarse Geometry, Index Theory, and Operator Algebras: Around the Mathematics of John Roe at the Spring Central and Western Joint Sectional Meeting of the AMS last weekend to which Christopher and I were invited to give talks. John Roe passed away last year. His personal webpage is still online … Continue reading "AMS Special Session on the Mathematics of John Roe"
Bavarian Geometry/Topology Meeting
These days saw the 2nd Bavarian Geometry/Topology Meeting, organized by Fabian Hebestreit and Markus Land, and hopefully becoming a tradition like the NRW topology meeting, which has by now had its 28th edition. The main event of the meeting were the lectures of Oscar Randal-Williams from Oxford, who discussed work on the cohomology of the mapping … Continue reading "Bavarian Geometry/Topology Meeting"
https://www.physicsforums.com/threads/is-the-component-of-a-vector-still-a-vector.340555/
Is the component of a vector still a vector?
1. Sep 26, 2009
Red_CCF
I know that a vector has magnitude and direction. But what about its components? Are they still considered vectors? Thanks in advance
2. Sep 26, 2009
mikelepore
Yes, if the unit vector is part of the term. For vector v=3i+4j, 3i is a vector, 4j is a vector. The 3 and 4 are not vectors.
https://runestone.academy/ns/books/published/csjava/Unit3-If-Statements/topic-3-3-if-else.html
# 3.3. Two-way Selection: if-else Statements¶
What if you want to pick between two possibilities? If you are trying to decide between a couple of things to do, you might flip a coin and do one thing if it lands as heads and another if it is tails. In programming, you can use the if keyword followed by a statement or block of statements and then the else keyword also followed by a statement or block of statements.
// A block if/else statement
if (boolean expression)
{
   statement1;
   statement2;
}
else
{
   do other statement;
   and another one;
}
// A single if/else statement
if (boolean expression)
   Do statement;
else
   Do other statement;
The following flowchart demonstrates that if the condition (the boolean expression) is true, one block of statements is executed, but if the condition is false, a different block of statements inside the else clause is executed.
Note
The else will only execute if the condition is false.
Assume you are flipping a coin to decide whether to go to a game or watch a movie. If the coin is heads then you will go to a game, if tails then watch a movie. The flowchart in Figure 2 shows the conditional control flow with 2 branches based on a boolean variable isHeads.
Run the following code twice for each boolean value for isHeads (true and false). Notice the program always prints “after conditional” since that statement is not nested inside the if or else blocks.
If/else statements can also be used with relational operators and numbers like below. If your code has an if/else statement, you need to test it with 2 test-cases to make sure that both parts of the code work.
Coding Exercise
Run the following code to see what it prints when the variable age is set to the value 18. Change the input value to 18 and then run it again to see the result of the print statement in the else part. Can you change the if-statement to indicate that you can get a license at age 16 instead of 18? Use 2 test cases for the value of age to test your code to see the results of both print statements.
Recall the program from the previous lesson that outputs a message based on whether you passed the midterm. The program uses two separate if statements to decide what to print. Notice the second condition is simply the negation of the first condition.
Rewrite this code to use a single if-else rather than two separate if statements.
The following program should print out “x is even” if the remainder of x divided by 2 is 0 and “x is odd” otherwise, but the code is mixed up. Drag the blocks from the left and place them in the correct order on the right. Click on Check Me to see if you are right.
Coding Exercise
Try the following code. Add an else statement to the if statement that prints out “Good job!” if the score is greater than 9. Change the value of score to test it. Can you change the boolean test to only print out “Good job” if the score is greater than 20?
## 3.3.1. Nested Ifs and Dangling Else¶
If statements can be nested inside other if statements. Sometimes with nested ifs we find a dangling else that could potentially belong to either if statement. The rule is that the else clause will always be a part of the closest if statement in the same block of code, regardless of indentation.
// Nested if with dangling else
if (boolean expression)
   if (boolean expression)
      statement1;
   else // belongs to closest if
      statement2;
Coding Exercise
Try the following code with a dangling else. Notice that the indentation does not matter. How could you get the else to belong to the first if statement?
You can use curly brackets { } to enclose a nested if and have the else clause belong to the top-level if clause like below:
// Nested if with dangling else
if (boolean expression)
{
   if (boolean expression)
      statement1;
}
else // belongs to first if
   statement2;
## 3.3.2. Programming Challenge : 20 Questions¶
This challenge is on repl.it.
Have you ever played 20 Questions? 20 Questions is a game where one person thinks of an object and the other players ask up to 20 questions to guess what it is.
There is great online version called Akinator that guesses whether you are thinking of a real or fictional character by asking you questions. Akinator is a simple Artificial Intelligence algorithm that uses a decision tree of yes or no questions to pinpoint the answer. Although Akinator needs a very large decision tree, we can create a guessing game for animals using a much smaller number of if-statements.
The Animal Guessing program below uses the following decision tree:
1. Try the Animal Guessing program below and run it a couple times thinking of an animal and answering the questions with y or n for yes or no. Did it guess your animal? Probably not! It’s not very good. It can only guess 3 animals. Let’s try to expand it!
System.out.println("Is it a pet (y/n)?");
String answer = scan.nextLine(); // assumes a Scanner named scan, as in the full program
if (answer.equals("y")) {
    System.out.println("I guess a dog! Click on run to play again.");
}
else {
    System.out.println("I guess an elephant! Click on run to play again.");
}
1. Did you notice that when it asked “Is it a pet?” and you said “y”, it immediately guessed “dog”? What if you were thinking of a cat? Try to come up with a question that distinguishes dogs from cats and put in code in the correct place (in place of I guess a dog) to ask the question, get the answer, and use an if/else to guess cat or dog. Run your code and test both possibilities!
2. How many animals can your game now guess? How many test-cases are needed to test all branches of your code?
Copy and paste your code from your repl.it and run to see if it passes the autograder tests. Include the link to your repl.it code in comments. Note that this code will only run with the autograder’s input and will not ask the user for input.
3-3-9: After you complete your code on repl, paste in a link to it (click on share) here.
## 3.3.3. Summary¶
• If statements can be followed by an associated else part to form a 2-way branch:
if (boolean expression) {
   Do statement;
}
else {
   Do other statement;
}
• A two way selection (if/else) is written when there are two sets of statements: one to be executed when the Boolean condition is true, and another set for when the Boolean condition is false.
• The body of the “if” statement is executed when the Boolean condition is true, and the body of the “else” is executed when the Boolean condition is false.
• Use 2 test-cases to find errors or validate results to try both branches of an if/else statement.
• The else statement attaches to the closest if statement.
https://jax.readthedocs.io/en/stable/_autosummary/jax.lax.stop_gradient.html
jax.lax.stop_gradient(x)
Operationally stop_gradient is the identity function, that is, it returns argument x unchanged. However, stop_gradient prevents the flow of gradients during forward or reverse-mode automatic differentiation. If there are multiple nested gradient computations, stop_gradient stops gradients for all of them.
For example:
>>> jax.grad(lambda x: x**2)(3.)
array(6., dtype=float32)
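A minimal companion sketch (not from the original page; the exact printed array representation may differ between JAX versions): wrapping the argument in stop_gradient makes the same gradient vanish.
>>> import jax
>>> jax.grad(lambda x: jax.lax.stop_gradient(x) ** 2)(3.)
array(0., dtype=float32)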
https://www.groundai.com/project/charging-of-graphene-by-magnetic-field-and-mechanical-effect-of-magnetic-oscillations/
# Charging of graphene by magnetic field and mechanical effect of magnetic oscillations
## Abstract
We discuss the fact that quantum capacitance of graphene-based devices leads to variation of graphene charge density under changes of external magnetic field. The charge is conserved, but redistributes to substrate or other graphene sheet. We derive exact analytic expression for charge redistribution in the case of ideal graphene in strong magnetic field. When we account for impurities and temperature, the effect decreases and the formulae reduce to standard quantum capacitance expressions. The importance of quantum capacitance for potential Casimir force experiments is emphasized and the corresponding corrections are worked out.
graphene, magnetic oscillations, monolayer graphene, magnetic charging, Quantum Hall Effect, Density of States
###### pacs:
75.70.Ak , 73.22.Pr, 12.20.Ds
Introduction. Graphene is a novel two-dimensional material having unique mechanical and electronic properties. The uniqueness of any two-dimensional material is that its electronic properties can be easily tuned by doping or gating. On top of that, graphene is the strongest known Quantum Hall material due to the sharp conical tip of its linear dispersion near the Dirac point. Magnetic oscillations can be noticed even at room temperature for sufficiently strong magnetic fields (1); (2).
The electronic properties of materials show up in the Casimir effect. There was a flurry of recent theoretical activity devoted to computation of the Casimir effect for graphene (3); (4); (5); (6); (7), with several controversies still unresolved, and so it is important to perform experiments to verify these computations. It is hard to make a mechanical measurement of the Casimir force in graphene due to its two-dimensional nature and since it is almost always electrically charged. The electrostatic force is much stronger than the Casimir force, so one needs to subtract the electrostatic contribution to single out the fluctuation-induced Casimir force. The first experiments have started to appear only recently (8). Since graphene exhibits a strong Quantum Hall Effect (QHE) it would be of interest to repeat the experiments (8) with strong transverse magnetic field.
A method for subtracting the (clearly dominant) electrostatic force (9) was used in (8): the electrostatic force depends on the gate voltage $V$ as $(V-V_0)^2$, where $V_0$ is a residual graphene voltage due to charged impurities and the chemical potential difference with the substrate. The formula has allowed the authors of (8) to find the gate voltage where the electrostatic force is fully compensated. The above formula does not include the quantum capacitance contribution, which is included by adding the "quantum capacitor" in a series connection: the geometric capacitance per unit area $c$ and the quantum capacitance $c_q = e^2 D(\mu)$, where $D(\mu)$ is a density of states (10); (11); (2), combine as $(c^{-1}+c_q^{-1})^{-1}$. Then the electric pressure follows from the accumulated charge density, so we get
$$\frac{F}{\mathrm{Area}}=\frac{1}{2\epsilon}\left(\int_{V_0}^{V}\left(c^{-1}+c_q^{-1}(V')\right)^{-1}dV'\right)^2 \approx \frac{1}{2\epsilon}\left((V_0-V)^2c^2-2(V_0-V)\,c^4\int_{V_0}^{V}\frac{dV'}{e^2 D(\mu_g(V'),T)}\right) \qquad (1)$$
where , is a Fermi-Dirac distribution, is a density of states for graphene. The chemical potential for graphene depends on the applied voltage and the chemical doping may give a constant shift: . For ideal graphene and so gives a singular contribution near the Dirac cone () at small temperatures. Due to charge puddles in realistic graphene on substrate, the inverse density of states becomes smooth in the vicinity of the Dirac point (2), hence it gives a weakly -dependent quantum capacitance of order , thus the simple fit should work well for small intervals of .
The story gets more interesting with magnetic field. The Casimir force for this case was estimated in Ref.(7), where pronounced dependence on the magnetic field and the chemical potential was shown. Thus it makes sense to scan a wider range of chemical potentials in the experiment. The magnetic field does also influence the electrostatic force, since the charge of ideal graphene is a step-like function of chemical potential with size of the step depending on the magnetic field value. Thus, even if we consider a suspended graphene with only chemical doping, its charge will oscillate when changing magnetic field. The discussion of electrostatic contribution in magnetic field and quantum capacitance effect is the aim of this note.
Below we consider three examples:
• Graphene suspended over the wide trench etched in a metallic substrate, or, alternatively, it can be suspended by leaning on crests.
• Two sheets of graphene forming a capacitor with fixed voltage applied (such geometry was discussed in Ref.(7) and argued to have a possibility of repulsive Casimir force)
• Graphene lying on the insulator-coated semiconductor with given gate voltage and with a grounded parallel metallic plate (or sphere) hanging at some distance over graphene (actually, it is attached to the vibrating cantilever of an atomic-force microscope). Such geometry was used in the recent experiment (8).
Below we derive explicit analytic formulas for the case of ideal graphene at zero temperature and then discuss a more realistic situation with the approach similar to Ref.(2); (12).
With magnetic field the energy levels of the conduction band of graphene are $E_N = \sqrt{\alpha|B|N}$, where $\alpha = 2e\hbar v_F^2$ and $N$ is a non-negative integer, with degeneracy $4C|B|$ per unit area, where $C = e/(2\pi\hbar)$ and the factor of 4 accounts for spin and valley degeneracy.
When the levels are quantized, only the levels below the chemical potential would be filled. For undoped graphene one would have a half-filled zeroth LL; this serves as a reference point for summation of the formally infinite spectrum of the "Dirac sea". For generic $\mu_g$ the charge density is quantized and given by
$$n(B)=4C|B|\left(\left[\frac{\mu_g^2\,\mathrm{sign}(\mu_g)}{\alpha|B|}\right]+\frac{1}{2}\right) \qquad (2)$$
where $[\,\cdot\,]$ denotes the integer part (floor).
Consider the case of graphene suspended over the etched trench of depth in a metallic substrate. In this geometry graphene is connected to a conductor. Another example to which the same computation applies is a piece of pyrographite from which a large graphene flake has exfoliated.
Consider graphene having the chemical potential $\mu_{g0}$ for mobile carriers with density $n_0$ at zero magnetic field. These are related as
$$n_0=\frac{1}{\pi}\,\mathrm{sign}(\mu_{g0})\left(\frac{\mu_{g0}}{\hbar v_F}\right)^2=4\,\mathrm{sign}(\mu_{g0})\,\frac{C}{\alpha}\,\mu_{g0}^2 \qquad (3)$$
When the magnetic field is switched on, the electronic structure of graphene changes much more strongly than that of the other materials involved, so we consider the effect of magnetic field only on graphene, and thus the chemical potential of the conductor in the bottom of the trench is fixed (here we neglect the electric penetration depth for the conductor). Since the magnetic field may induce changes of the carrier number of graphene, $\delta n$, this creates an extra electric field which shifts the chemical potential of graphene by $e^2\delta n/c$, so we solve
$$\mu_g-\mu_{g0}=e^2\,\delta n/c \qquad (4)$$
where $\mu_g$ depends on $\delta n$ and $c = \epsilon/d$ is a capacitance per unit area: $d$ is the distance between the plates of the capacitor and $\epsilon$ is a dielectric permittivity ($\epsilon_0$ for the vacuum). Using Eq.(2) we get the equation:
$$4Ce^4\,\mathrm{sign}(\mu_{g0})\,\delta n^2+\left(\alpha c^2+8Cce^2|\mu_{g0}|\right)\delta n+4Cc^2|B|\,\alpha\left(\left\{\frac{\mu_{g0}^2\,\mathrm{sign}(\mu_{g0})}{\alpha|B|}\right\}-\frac{1}{2}\right)=0\,, \qquad (5)$$
which has a simple solution in the limit where the term quadratic in $\delta n$ can be neglected:
$$\delta n=\frac{4C|B|}{1+8C|\mu_{g0}|e^2/(\alpha c)}\left(\frac{1}{2}-\left\{\frac{\mu_{g0}^2\,\mathrm{sign}(\mu_{g0})}{\alpha|B|}\right\}\right) \qquad (6)$$
where $\{\,\cdot\,\}$ gives the fractional part. The exact solution is also straightforward. This result shows how the charge of graphene oscillates when the magnetic field is changed, see the dotted curve in Fig.1. The corresponding force oscillation follows from the resulting electrostatic pressure.
It is clear that temperature and disorder would reduce the effect we discuss. For the case of very clean suspended graphene we expect the disorder to be weak and choose a simplified model of equal-shape broadening of all the Landau levels. It is clear that the actual result would depend mostly on the shape of the level that is nearest to the chemical potential, so it's the width of that level that we should take as our broadening. The broadening is computationally equivalent to smearing of the chemical potential, see Fig. 1.
The effect we discuss is another manifestation of the integer Quantum Hall Effect. Qualitatively, if the last filled Landau level (LL) is less than half-filled, then the chemical potential is higher than the one without magnetic field, so graphene wants to get rid of carriers and gets positively charged; the opposite happens for a more-than-half filled level. This also shows that the upper bound for magnetic charging of graphene is half the population of one LL, $\delta n \le 2C|B|$; this bound is never achieved due to non-infinite geometric capacitance and level broadening. For example, with nm and we get in the denominator: , which is a typical quantum to geometric capacitance ratio for graphene experiments on thin insulator layers.
The magnetic oscillations of charge have a mechanical effect, creating the attraction between the plates of charged capacitor. We see that a typical variation of electron density could be of order , which translates into the electric pressure
$$P=n^2e^2/(2\epsilon_0)\sim 2000\ \mathrm{Pa} \qquad (7)$$
This pressure is of the same order of magnitude as the Casimir pressure at distance nm between the plates ( estimated to be roughly . Note that the electrostatic force from the magnetic charging effect falls off as due to linearly decreasing capacity, while the Casimir force falls off as for small temperatures, so these effects are comparable.
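As a rough numerical cross-check of Eq. (7) (a sketch added here, not from the paper; the density variation $n \approx 1.2\times10^{11}\ \mathrm{cm^{-2}}$ is an assumed, illustrative value of the order of a fraction of one Landau level):
# rough check of P = (n e)^2 / (2 eps0) for an assumed n ~ 1.2e11 cm^-2
e = 1.602e-19       # elementary charge, C
eps0 = 8.854e-12    # vacuum permittivity, F/m
n = 1.2e11 * 1e4    # assumed density variation, converted to m^-2
P = (n * e) ** 2 / (2 * eps0)
print(round(P), "Pa")   # roughly 2000 Pa, consistent with Eq. (7)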
Let us see if it is feasible to measure the effect. Consider a trench of width nm. Then the membrane has a parabolic form with tension and the central deflection is where is a 2D Young modulus (note that it is possible that for small deformations the Hookes law is invalid for graphene due to microscopic out of plane buckled form of graphene (13); (14), thus, for small deformations the effective Young modulus could be lower). For nm trench we get , which can be measured in STM or in Bragg diffraction experiments. For wider trenches the deflection grows as .
Having the possibility to measure the electric attraction, one can apply voltage to graphene and tune it to minimize the attraction, analogously to Ref.(8).
Now consider a geometry of two parallel sheets of graphene that are electrically connected. This may be imagined as a drum made of two graphene sheets. Let these sheets be doped to $\mu_{10}$ and $\mu_{20}$ without magnetic field and have the corresponding carrier densities. An interest in such type of geometry stems from the prediction of possible repulsive Casimir force in magnetic field when $\mu_{10}$ and $\mu_{20}$ are of opposite signs (7). In magnetic field we assume a carrier density redistribution $\delta n$ and solve:
$$\mu_1-\mu_{10}=\mu_2-\mu_{20}+e^2\,\delta n/c \qquad (8)$$
together with Eq.(2) for both graphene sheets. The solution, in the same approximation as for Eq.(6), is
$$\delta n=4C|B|\times\frac{|\mu_{10}|\left(\left\{\frac{\mu_{20}^2\,\mathrm{sign}(\mu_{20})}{\alpha|B|}\right\}-\frac{1}{2}\right)-|\mu_{20}|\left(\left\{\frac{\mu_{10}^2\,\mathrm{sign}(\mu_{10})}{\alpha|B|}\right\}-\frac{1}{2}\right)}{|\mu_{10}|+|\mu_{20}|+8C|\mu_{10}\mu_{20}|e^2/(\alpha c)} \qquad (9)$$
where . Note that the effect of magnetic oscillations cancels out if the two graphene sheets are at equal chemical potentials (and are of equal quality).
Now we turn to a much more flexible experimental setup used in (8) and discuss gated graphene lying on the insulator-coated semiconductor with a grounded parallel metal plate (or sphere) hanging over it; the upper plate is an atomic force microscope operated in the frequency-shift regime (15); (16); (8). The presence of the substrate and a larger distance to the metal plate (of order 300 nm) make the quantum capacitance effects much weaker, but these are still important to improve precision. Remarkably, this experimental setup allows for an excellent direct mechanical measurement of magnetic oscillations together with QHE.
Now we assume only weak magnetic oscillations, so the density of states is a smooth function and it is convenient to reformulate the solution in terms of a continuum density of states: we have a series connection of two capacitors: the standard geometric one with $c$ (per unit area) and a quantum one with $c_q = e^2 D(\mu)$, where $D(\mu)$ is a density of states (10); (11); (2). So, the total capacitance is $(c^{-1}+c_q^{-1})^{-1}$. We see that the relative effect of quantum capacitance decreases with the distance to the plate due to the decreasing of $c$, so, for fixed voltage its contribution to the force decreases accordingly; it is small, but may still compete with the Casimir force at low temperatures (7).
To study magnetic oscillations and Casimir effect at strong magnetic field one needs to extend the experiment of Ref.(8) by extra bottom-gating, so that a wide range of Landau level filling factors could be scanned.
For the electrostatic force acting on the unit area of graphene we may use Eq.(1) and follow the model of Ref. (2) to get the density of states. The model consists of Lorentz and temperature level broadening superimposed on the Gaussian carrier number broadening due to charge puddles, see Fig. 2.
With the improved fit the value of the residual potential difference $V_0$ can be mechanically measured with fabulous precision. $V_0$ is the potential difference between graphene and the metal plate when there is no electric field between them, so it equals the graphene chemical potential:
$$V_0=\mu_{\mathrm{graphene}}+\mathrm{const} \qquad (10)$$
The electron doping of graphene is a linear function of bottom gate voltage (one can also easily write the quantum capacitance correction, but it is small for relatively thick insulator layer): . So, the experiment allows for precise measurement of both and . Knowing this for the particular sample is also helpful for theoretical refinement of Casimir force computations. Importantly, the known can be plugged back into Eq.(1) () to improve the fit and hence the precision. To get a more pronounced Quantum Hall physics, the above experiment may be repeated with gated graphene suspended over thin layer of insulator. Then one may hope to get a strong evidence for interaction effects.
To conclude, we have elaborated on the two possible experimental schemes to measure the Casimir effect for graphene with magnetic field. As a by-product, we note that the newly-developed mechanical method (15); (16); (8) may lead to the precise measurement of density of states if sample is additionally gated.
Acknowledgements: I am grateful to Pablo Rodriguez-Lopez, Ignat Fialkovsky, Galina Klimchitskaya and Feo Kusmartsev for useful discussions. This work has been supported by EPSRC through the grant EP/l02669X/1.
### References
1. A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. 81, 109 (Jan 2009), http://link.aps.org/doi/10.1103/RevModPhys.81.109
2. L. A. Ponomarenko, R. Yang, R. V. Gorbachev, P. Blake, A. S. Mayorov, K. S. Novoselov, M. I. Katsnelson, and A. K. Geim, Phys. Rev. Lett. 105, 136801 (Sep 2010), http://link.aps.org/doi/10.1103/PhysRevLett.105.136801
3. M. Bordag, I. V. Fialkovsky, D. M. Gitman, and D. V. Vassilevich, Phys. Rev. B 80, 245406 (Dec. 2009), arXiv:0907.3242 [hep-th]
4. I. V. Fialkovsky, V. N. Marachevsky, and D. V. Vassilevich, Phys. Rev. B 84, 035446 (Jul. 2011), arXiv:1102.1757 [hep-th]
5. J. Sarabadani, A. Naji, R. Asgari, and R. Podgornik, Phys. Rev. B 84, 155407 (Oct. 2011), arXiv:1105.4241 [cond-mat.mes-hall]
6. M. Bordag, G. L. Klimchitskaya, and V. M. Mostepanenko, Phys. Rev. B 86, 165429 (Oct. 2012), arXiv:1209.3302 [cond-mat.mtrl-sci]
7. W.-K. Tse and A. H. MacDonald, Physical Review Letters 109, 236806 (Dec. 2012), arXiv:1208.3786 [cond-mat.mes-hall]
8. A. A. Banishev, H. Wen, J. Xu, R. K. Kawakami, G. L. Klimchitskaya, V. M. Mostepanenko, and U. Mohideen, Phys. Rev. B 87, 205433 (May 2013), http://link.aps.org/doi/10.1103/PhysRevB.87.205433
9. M. Bordag, G. L. Klimchitskaya, U. Mohideen, and V. M. Mostepanenko, Advances in the Casimir Effect (2009)
10. S. Luryi, Applied Physics Letters 52, 501 (Feb. 1988)
11. J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, Phys. Rev. B 50, 1760 (Jul 1994), http://link.aps.org/doi/10.1103/PhysRevB.50.1760
12. S. Slizovskiy and J. J. Betouras, Phys. Rev. B 86, 125440 (Sep. 2012), arXiv:1203.5044 [cond-mat.mes-hall]
13. A. O’Hare, F. V. Kusmartsev, and K. I. Kugel, Physica B Condensed Matter 407, 1964 (Jun. 2012)
14. A. O’Hare, F. V. Kusmartsev, and K. I. Kugel, Nano Letters 12, 1045 (Feb. 2012)
15. C.-C. Chang, A. A. Banishev, R. Castillo-Garza, G. L. Klimchitskaya, V. M. Mostepanenko, and U. Mohideen, Phys. Rev. B 85, 165443 (Apr 2012), http://link.aps.org/doi/10.1103/PhysRevB.85.165443
16. A. A. Banishev, C.-C. Chang, G. L. Klimchitskaya, V. M. Mostepanenko, and U. Mohideen, Phys. Rev. B 85, 195422 (May 2012), http://link.aps.org/doi/10.1103/PhysRevB.85.195422
https://freakonometrics.hypotheses.org/category/r/databases
# Is sport an activity for the rich?
Here is a light little post to start the Christmas holidays. Still as part of the R project for the Data Science for Actuaries training programme, Cyril Legrand proposed merging two databases, one on the average income per (fiscal) household by commune, and one on the number of registered members of sports clubs. Some reprocessing of the INSEE codes is needed for the join, because, for example, Marseille is coded by arrondissement in one of the databases and by city in the other.
# How far from a bank do we live?
As part of the R project for the Data Science for Actuaries training programme, I will keep putting online pieces of code that can be useful in a spatial context. The last post, on mapping the Brexit vote, was picked up (and much improved) on the neighbouring rgeomatic site. Today, I will draw on the work of Etienne Flichy, which combines the distribution of the population across the territory with the location of bank branches.
We are talking about banks here, but if you have a database of hairdressers, bakeries, etc., you can do the same thing! (which is to say we will have plenty of fun once the SIRENE database is opened up, in the coming weeks). We will assume that we have a database with all the banks geocoded. For the exercise, we will use the location of bank branches, using data from cbanque.com. It is fairly easy to scrape the site once you look at how the pages are built, e.g. http://cbanque.com/pratique/agences/credit-cooperatif/35/. From there we retrieve the (postal) addresses and can use https://adresse.data.gouv.fr/csv/ (or various other tools) to geocode them.
# Working with “large” datasets, with dplyr and data.table
A few months ago, I was doing some training on data science for actuaries, and I started to get interesting puzzling questions. For instance, Fleur was working on telematic data, and she's been challenging my (rudimentary) knowledge of R. As claimed by Donald Knuth, "we should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil". So usually, in my courses and my training sessions, codes are very basic, and easy to understand. But they are usually poorly efficient. Since I was challenged to work on very large datasets, we've been working on R functions to manipulate those possibly (very) large datasets, and to run some simple functions as fast as possible (with simple filter and aggregation functions).
In order to illustrate, let us generate our "large" telematic dataset. Assume that we have 10,000 drivers, each of them drives about 200 times, and each time, we have, say, 80 locations. That means around 160 million observations. It is "large", but not huge.
> rm(list=ls())
> N_id=10000
> N_tr=200
> T_tr=80
In order to have a code as general as possible, assume that we have some kind of randomness,
> set.seed(1)
> N=rpois(N_id,N_tr)
> N_traj=rpois(sum(N),T_tr)
By "observation", we consider a driver Id., a Trajectory Id., and a location (latitude and longitude) at some specific dates (e.g. every 15 sec.). Again, just because we want some dataset to illustrate, we will draw drivers' homes randomly (here uniformly on some square)
> origin_lat=runif(N_id,-5,5)
> origin_lon=runif(N_id,-5,5)
And, then, from those locations, we generate a 2-dimensional random walk,
> lat=lon=Traj_Id=rep(NA,sum(N_traj))
> Pers_Id=rep(NA,length(N_traj))
> s=1
> for(i in 1:N_id){Pers_Id[s:(s+N[i]-1)]=i;s=s+N[i]}
> s=1
> for(i in 1:length(N_traj)){lat[s:(s+N_traj[i]-1)]=origin_lat[Pers_Id[i]]+
+ cumsum(c(0,rnorm(N_traj[i]-1,0,sd=.2)));
+ lon[s:(s+N_traj[i]-1)]=origin_lon[Pers_Id[i]]+
+ cumsum(c(0,rnorm(N_traj[i]-1,0,sd=.2)));
+ s=s+N_traj[i]}
We have something which looks like
# Names in the U.S., from James Smith to Jose Rodriguez
Two weeks ago, @mona published an interesting post on her blog, about a difficult question, What's The Most Common Name In America? There were stats about first names, in the U.S., and last names, too. That information is – somehow – easy to get. But usually, it is more complicated to get the first and the last name together. For confidentiality issues! Datasets – the ones I deal with – are supposed to be anonymized, so I never see the first and the last names. In a previous post, a few years ago, I did mention the so-called Social Security Death Master File. In that file, we have Social Security numbers, with the date of birth, the date of death as well as the first and the last name. So I did use those files to get stats about the first and the last names of American citizens. Of course, it is very restrictive. I have only U.S. citizens that have a Social Security number (which is not compulsory in the U.S. as far as I understood) and who passed away (as mentioned in the name of the dataset: the death master file). Another great thing about that dataset is that I have the date of birth, so I can look at some cohort effect (see opendata.stackexchange for an interesting discussion on that dataset).
# Extracting datasets from excel files in a zipped folder
The title of the post is a bit long, but that’s the problem I was facing this morning: importing dataset from files, online. I mean, it was not a “problem” (since I can always download, and extract manually the files), more a challenge (I should be able to do it in R, directly). The files are located on ressources-actuarielles.net, in a zip file. Those are mortality tables used in French speaking African countries, and I guess that one problem came from special characters, such as “é” or “è”… When you open the zip file, you see a folder
and in that folder, several files that I would like to import
# How to import some parts of a large database
In the introduction of Computational Actuarial Science with R, there was a short paragraph on how we could import only some parts of a large database, by selecting specific variables. The trick was to use the following
> read.table.select.columns=function(datatablename,
+ I,sep=";"){
+ datanc=read.table(datatablename,header=TRUE,
+ sep=sep,skip=0,nrows=1)
+ mycols=rep("NULL",ncol(datanc))
+ names(mycols)=names(datanc)
+ mycols[I]=NA
+ datat=read.table(datatablename,header=TRUE,
+ sep=sep,colClasses=mycols)
+ return(datat)}
For instance, if we use the same dataset as in the introduction, we can import only two variables of interest,
> loc="http://myweb.fsu.edu/jelsner/extspace/extremedatasince1899.csv"
"Wmax"),sep=",")
Region Wmax
1 Basin 105.56342
2 Basin 40.00000
3 Basin 35.41822
4 Basin 51.06743
5 Florida 87.34328
6 Basin 96.64138
7 Gulf 35.41822
8 US 35.41822
9 US 87.34328
10 US 106.35318
> dim(dt1)
[1] 2100 2
# R package for Computational Actuarial Science
A webpage for the book is now hosted on
http://cas.uqam.ca/
So far, it is a very basic page, but information regarding the package can be found there. For instance, to install the package, with all the datasets, the R code is
> install.packages("CASdatasets", repos = "http://cas.uqam.ca/pub/R/")
The reference manual provides a description of all datasets.
# Regression on categorical variables
This morning, Stéphane asked me a tricky question about extracting coefficients from a regression with categorical explanatory variates. More precisely, he asked me if it was possible to store the coefficients in a nice table, with information on the variable and the modality (those two pieces of information being in two different columns). Here is some code I did to produce the table he was looking for, but I guess that some (much) smarter techniques can be used (comments – see below – are open). Consider the following dataset
> base
x sex hair
1 1 H Black
2 4 F Brown
3 6 F Black
4 6 H Black
5 10 H Brown
6 5 H Blonde
with two factors,
> levels(base$hair)
[1] "Black" "Blonde" "Brown"
> levels(base$sex)
[1] "F" "H"
Let us run a (standard linear) regression,
> reg=lm(x~hair+sex,data=base)
which is here
> summary(reg)
Call:
lm(formula = x ~ hair + sex, data = base)
Residuals:
1 2 3 4 5 6
-3.714e+00 -2.429e+00 2.429e+00 1.286e+00 2.429e+00 -2.220e-16
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.5714 3.4405 1.038 0.408
hairBlonde 0.2857 4.8655 0.059 0.959
hairBrown 2.8571 3.7688 0.758 0.528
sexH 1.1429 3.7688 0.303 0.790
Residual standard error: 4.071 on 2 degrees of freedom
Multiple R-squared: 0.2352, Adjusted R-squared: -0.9121
F-statistic: 0.205 on 3 and 2 DF, p-value: 0.886
If we want to extract the names of the factors (assuming here that there are no numbers in the name of the factor), and the values of the associated modality, one can use
> VARIABLE=c("",gsub("[-^0-9]", "", names(unlist(reg$xlevels))))
> MODALITY=c("",as.character(unlist(reg$xlevels)))
> names=data.frame(VARIABLE,MODALITY,NOMVAR=c(
+ "(Intercept)",paste(VARIABLE,MODALITY,sep="")[-1]))
> regression=data.frame(NOMVAR=names(coefficients(reg)),
+ COEF=as.numeric(coefficients(reg)))
> merge(names,regression,all.x=TRUE)
NOMVAR VARIABLE MODALITE COEF
1 (Intercept) 3.5714286
2 hairBlack hair Black NA
3 hairBlonde hair Blonde 0.2857143
4 hairBrown hair Brown 2.8571429
5 sexF sex F NA
6 sexH sex H 1.1428571
or, if we want modalities excluding references,
> merge(names,regression)
NOMVAR VARIABLE MODALITE COEF
1 (Intercept) 3.5714286
2 hairBlonde hair Blonde 0.2857143
3 hairBrown hair Brown 2.8571429
4 sexH sex H 1.1428571
In order to reproduce the table Stéphane sent me, let us use the following code to produce an html table,
> library(xtable)
> htlmtable <- xtable(merge(names,regression))
> print(htlmtable,type="html")
NOMVAR VARIABLE MODALITY COEF
1 (Intercept) 3.57
2 hairBlonde hair Blonde 0.29
3 hairBrown hair Brown 2.86
4 sexH sex H 1.14
So yes, it is possible to build a table with the variable, modalities, and coefficients. This function can be interesting on prospective mortality, when we do have a large number of modalities per factor (years, ages and year of birth). Consider the following datasets
> DEATH=read.table(
+ "http://freakonometrics.free.fr/DeathsSwitzerland.txt",
+ header=TRUE,sep=";")  # header/sep arguments assumed
> EXPOSURE=read.table(
+ "http://freakonometrics.free.fr/ExposuresSwitzerland.txt",
+ header=TRUE,sep=";")
> DEATH$Age=as.numeric(as.character(DEATH$Age))
> DEATH=DEATH[-which(is.na(DEATH$Age)),]
> EXPOSURE$Age=as.numeric(as.character(EXPOSURE$Age))
> EXPOSURE=EXPOSURE[-which(is.na(EXPOSURE$Age)),]
> base=data.frame(y=as.factor(DEATH$Year),a=as.factor(DEATH$Age),
+ c=as.factor(DEATH$Year-DEATH$Age),D=DEATH$Total,E=EXPOSURE$Total)
> base=base[base$E>0,]
and the following nonlinear model, based on the Lee-Carter model (including a cohort effect), $N_{x,t}\sim\mathcal{P}(E_{x,t}\cdot \exp[\alpha_x+\beta_x \kappa_t + \gamma_x \delta_{t-x}])$, can be estimated using
> library(gnm)
> reg=gnm(D~a+Mult(a,y)+Mult(a,c),offset=log(E),family=poisson,data=base)
In order to extract the 671 coefficients from the regression,
> length(coefficients(reg))
[1] 671
(as properly as possible) we have to be careful: names of coefficients are not that simple to handle. For instance, we can see things like
> coefficients(reg)[200]
Mult(., year).age98
0.04203519
In order to extract them, define
> na=length((reg$xlevels)$age)
> ny=length((reg$xlevels)$year)
> nc=length((reg$xlevels)$cohort)
> VARIABLElong=c("",rep("age",na),rep("Mult(., year).age",na),
+ rep("Mult(a, .).y",ny),
+ rep("Mult(., cohort).age",na),rep("Mult(age, .).cohort",nc))
> VARIABLEshort=c("",rep("age",na),rep("age",na),rep("year",ny),
+ rep("age",na),rep("cohort",nc))
> MODALITY=c("",(reg$xlevels)$age,(reg$xlevels)$age,
+ (reg$xlevels)$year,(reg$xlevels)$age,(reg$xlevels)$cohort)
> names=data.frame(VARIABLElong,VARIABLEshort,
+ MODALITY,NOMVAR=c("(Intercept)",paste(VARIABLElong,MODALITY,sep="")[-1]))
> regression=data.frame(NOMVAR=names(coefficients(reg)),
+ COEF=as.numeric(coefficients(reg)))
Here we go, now we have the coefficients from the regression in a nice table,
> outputreg=merge(names,regression)
> outputreg[1:10,]
        NOMVAR VARIABLElong VARIABLEshort MODALITY        COEF
1  (Intercept)                                       -8.22225458
2         age1          age           age        1   -0.87495451
3        age10          age           age       10   -1.67145704
4       age100          age           age      100    4.91041650
5        age11          age           age       11   -1.00186990
6        age12          age           age       12   -1.05953497
7        age13          age           age       13   -0.90952859
8        age14          age           age       14    0.02880668
9        age15          age           age       15    0.42830738
10       age16          age           age       16    1.35961403
It is now possible to plot all the coefficients, as functions of the age, the year of observation, or the year of birth. For instance, for the standard average age effect (namely $\alpha_x$ as a function of $x$), we can use
> typevariable=as.character(unique(outputreg$VARIABLElong))
> basegraph=outputreg[outputreg$VARIABLElong==typevariable[2],]
> x=as.numeric(as.character(basegraph$MODALITY))
> y=basegraph$COEF
> plot(x,y,type="p",col="blue",xlab="Age")
while the cohort effect ($\delta_t$ as a function of $t$) is obtained using
> basegraph=outputreg[outputreg$VARIABLElong==typevariable[5],]
> x=as.numeric(as.character(basegraph$MODALITY))
> y=basegraph$COEF
> plot(x,y,type="p",col="blue",xlab="Cohort (year of birth)",ylim=c(0,10))
# Open data and ecological fallacy
A couple of days ago, on Twitter, @alung mentioned an old post I did publish on this blog about open data, explaining how difficult it was to get access to data in France (the post, published almost 18 months ago, can be found here, in French). And @alung was wondering if it was still that hard to access nice datasets. My first answer was that actually, people were more receptive, and I now have more people willing to share their data. And on the internet, amazing datasets can be found now very easily. For instance in France, some detailed information can be found about qualifications, housing and jobs, by small geographical areas, on http://www.recensement.insee.fr (thanks @coulmont for the link). And that is great for researchers (and anyone actually willing to check things by himself).
But one should be aware that those aggregate data might not be sufficient to build up econometric models, and to infer individual behaviors. Thinking that relationships observed for groups necessarily hold for individuals is a common fallacy (the so-called "ecological fallacy").
In a popular paper, Robinson (1950) discussed "ecological inference", stressing the difference between ecological correlations (on groups) and individual correlations (see also Thorndike (1937)). He considered two aggregated quantities, per American state: the percent of the population that was foreign-born, and the percent that was literate. One dataset used in the paper was the following
> library(eco)
> data(forgnlit30)
> tail(forgnlit30)
Y X W1 W2 ICPSR
43 0.076931986 0.03097168 0.06834300 0.077206504 66
44 0.006617641 0.11479052 0.03568792 0.002847920 67
45 0.006991899 0.11459207 0.04151310 0.002524065 68
46 0.012793782 0.18491515 0.05690731 0.002785916 71
47 0.007322475 0.13196654 0.03589512 0.002978594 72
48 0.007917342 0.18816461 0.02949187 0.002916866 73
The correlation between foreign-born and literacy was
> cor(forgnlit30$X,1-forgnlit30$Y)
[1] 0.2069447
So it seems that there is a positive correlation, so a quick interpretation could be that in the 30’s, Americans were illiterate, but hopefully, literate immigrants got the idea to come to the US. But here, it is like in Simpson’s paradox, because actually, the sign should be negative, as obtained in individual studies. In the state-based-data study, the correlation was positive mainly because foreign-born people tended to live in states where the native-born are relatively literate…
Hence, the problem is clearly how individuals were grouped. Consider the following set of individual observations,
> n=1000
> r=-.5
> library(mnormt)   # rmnorm() comes from the mnormt package
> Z=rmnorm(n,c(0,0),matrix(c(1,r,r,1),2,2))
> X=Z[,1]
> E=Z[,2]
> Y=3+2*X+E
> cor(X,Y)
[1] 0.8636764
Consider now some regrouping, e.g.
> I=cut(Z[,2],qnorm(seq(0,1,by=.05)))
> Yg=tapply(Y,I,mean)
> Xg=tapply(X,I,mean)
Then the correlation is rather different,
> cor(Xg,Yg)
[1] 0.1476422
Here we have a strong positive individual correlation, and a small (positive) correlation on grouped data, but almost anything is possible.
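To see how much the choice of grouping matters, here is a small extra experiment (my own, not in the original post, in the spirit of the code above): grouping on quantiles of $X$ itself, instead of the noise, makes the ecological correlation almost perfect,
> I2=cut(X,quantile(X,seq(0,1,by=.05)),include.lowest=TRUE)
> Yg2=tapply(Y,I2,mean)   # group means wash out the individual noise
> Xg2=tapply(X,I2,mean)
> cor(Xg2,Yg2)            # very close to 1
so the same individual data can give almost any ecological correlation, depending on how the groups are built.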
Models with random coefficients have been used to make ecological inferences. But that is a long story, and I will probably come back with a more detailed post on that topic, since I am still working on this with @coulmont (following some comments by @frbonnet on his post on recent French elections on http://coulmont.com/blog/).
# Open data might be a false good opportunity…
I am always surprised to see many people on Twitter tweeting about #opendata, e.g. @data4all, @usdatagov, @datapublicatwit, @ProPublica or @open3 among so many others… Initially, I was also very enthusiastic, but I have to admit that open data are rarely raw data. Which is what I am usually looking for, as a statistician…
Consider the following example: I was wondering (Valentine’s day is approaching) when a man born in 1975 (say) will get married – if he ever gets married? More technically, I was looking for a distribution of the age at first marriage (given the year of birth), including the proportion of men that will never get married, for that specific cohort.
The only data I found on the internet is the following, on statistics.gov.uk/
Note that we can also focus on women (e.g. here). Is it possible to use that open data to get an estimation of the distribution of the age at first marriage for some specific cohort? (and to answer the question I asked). Here, we have two dimensions: on each line, the year $t$ of the marriage, and on each column, the age $a$ of the man when he gets married. Assume that those were raw data, i.e. that we have $N_{a,t}$, the number of marriages of men of age $a$ during the year $t$.
We are interested in a longitudinal reading of the table, i.e. consider some man born in year $c$: we want to estimate (or predict) the age at which he will get married, if he gets married. With raw data, we can do it… The first step is to build up triangles (to have a cohort vs. age reading of the data), and then to consider a model, e.g.
$N_{a,c}\sim\mathcal{P}(\exp[\gamma+\alpha_a+\delta_c])$
where $\alpha_a$ is an age effect, and $\delta_c$ is a cohort effect.
base=read.table("http://freakonometrics.free.fr/mariage-age-uk.csv",
  sep=";",header=TRUE)   # separator/header arguments were truncated in the original post; assumed here
m=base[1:16,]
m=m[,3:10]
m=as.matrix(m)
triangle=matrix(NA,nrow(m),ncol(m))
n=ncol(m)
for(i in 1:16){
triangle[i,]=diag(m[i-1+(1:n),])
}
triangle[nrow(m),1]=m[nrow(m),1]
triangle
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,] 12 104 222 247 198 132 51 34
[2,] 8 89 228 257 202 102 75 49
[3,] 4 80 209 247 168 129 92 50
[4,] 4 73 196 236 181 140 88 45
[5,] 3 78 242 206 161 114 68 47
[6,] 11 150 223 199 157 105 73 39
[7,] 12 117 194 183 136 96 61 36
[8,] 11 118 202 175 122 92 62 40
[9,] 15 147 218 162 127 98 72 48
[10,] 20 185 204 171 138 112 82 NA
[11,] 31 197 240 209 172 138 NA NA
[12,] 34 196 233 202 169 NA NA NA
[13,] 35 166 210 199 NA NA NA NA
[14,] 26 139 210 NA NA NA NA NA
[15,] 18 104 NA NA NA NA NA NA
[16,] 10 NA NA NA NA NA NA NA
Y=as.vector(triangle)
YEARS=seq(1918,1993,by=5)
AGES=seq(22,57,by=5)
X1=rep(YEARS,length(AGES))
X2=rep(AGES,each=length(YEARS))
reg=glm(Y~as.factor(X1)+as.factor(X2),family="poisson")
summary(reg)
Call:
glm(formula = Y ~ as.factor(X1) + as.factor(X2), family = "poisson")
Deviance Residuals:
Min 1Q Median 3Q Max
-5.4502 -1.1611 -0.0603 1.0471 4.6214
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 2.8300461 0.0712160 39.739 < 2e-16 ***
as.factor(X1)1923 0.0099503 0.0446105 0.223 0.823497
as.factor(X1)1928 -0.0212236 0.0449605 -0.472 0.636891
as.factor(X1)1933 -0.0377019 0.0451489 -0.835 0.403686
as.factor(X1)1938 -0.0844692 0.0456962 -1.848 0.064531 .
as.factor(X1)1943 -0.0439519 0.0452209 -0.972 0.331082
as.factor(X1)1948 -0.1803236 0.0468786 -3.847 0.000120 ***
as.factor(X1)1953 -0.1960149 0.0470802 -4.163 3.14e-05 ***
as.factor(X1)1958 -0.1199103 0.0461237 -2.600 0.009329 **
as.factor(X1)1963 -0.0446620 0.0458508 -0.974 0.330020
as.factor(X1)1968 0.1192561 0.0450437 2.648 0.008107 **
as.factor(X1)1973 0.0985671 0.0472460 2.086 0.036956 *
as.factor(X1)1978 0.0356199 0.0520094 0.685 0.493423
as.factor(X1)1983 0.0004365 0.0617191 0.007 0.994357
as.factor(X1)1988 -0.2191428 0.0981189 -2.233 0.025520 *
as.factor(X1)1993 -0.5274610 0.3241477 -1.627 0.103689
as.factor(X2)27 2.0748202 0.0679193 30.548 < 2e-16 ***
as.factor(X2)32 2.5768802 0.0667480 38.606 < 2e-16 ***
as.factor(X2)37 2.5350787 0.0671736 37.739 < 2e-16 ***
as.factor(X2)42 2.2883203 0.0683441 33.482 < 2e-16 ***
as.factor(X2)47 1.9601540 0.0704276 27.832 < 2e-16 ***
as.factor(X2)52 1.5216903 0.0745623 20.408 < 2e-16 ***
as.factor(X2)57 1.0060665 0.0822708 12.229 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 5299.30 on 99 degrees of freedom
Residual deviance: 375.53 on 77 degrees of freedom
(28 observations deleted due to missingness)
AIC: 1052.1
Number of Fisher Scoring iterations: 5
Here, we have been able to derive the estimated age effects $\hat\alpha_a$ and cohort effects $\hat\delta_c$ (the first factor in the regression, labeled with years, now denotes the cohort, i.e. the year of birth).
We can now predict the number of marriages per year, and per cohort
Here, given the cohort $c$, the shape of $a\mapsto\hat N_{a,c}$ (the predicted number of marriages at each age) is the following
Yp=predict(reg,type="response")
tYp=matrix(Yp,nrow(m),ncol(m))
tYp[16,]
[1] 10.00000 222.94525 209.32773 159.87855 115.06971 42.59102
[7] 18.70168 148.92360
The errors (Pearson error) look like that
Ep=residuals(reg,type="pearson")
(where the darker the blue, the smaller the residuals, and the darker the red, the higher the residuals). Obviously, we are missing something here, like a diagonal effect. But this is not the main problem here…
I guess that the study here is not valid. The problem is that we deal with open data, and the numbers of marriages are not given here: what is given is the proportion of marriages of men of age $a$ during the year $t$, with a yearly normalization. There is a constraint on each line (each year), i.e. what we observe is
$P_{a,t}=N_{a,t}/\sum_{a} N_{a,t}$
so that
$\sum_{a} P_{a,t}$ is constant (here, a normalization per 1,000).
This is mentioned in the title of the table.
It is still possible to consider a Poisson regression on the $P_{a,t}$, but unfortunately, I do not think any interpretation is valid (unless demography did not change last century). For instance, the following sum
$\sum_{a} \hat{P}_{a,c}$ (for a given cohort $c$)
looks like that
apply(tYp,1,sum)
[1] 919.948 838.762 846.301 816.552 943.559 930.280 857.871 896.113
[9] 905.086 948.087 895.862 853.738 826.003 816.192 813.974 927.437
i.e. if we look at the graph
But I do not think we can interpret that sum as the probability (if we divide by 1,000) that a man in that cohort gets married…. And more basically, I cannot do anything with that dataset…
So open data might be interesting. The problem is that most of the time, the data are somehow normalized (or aggregated). And then, it becomes difficult to use them…
So I will have to work further to be able to write something (mathematically valid) on marriage strategy before Valentine’s day…. to be continued.
# Too large datasets for regression ? What about subsampling….
Recently, a classmate working in an insurance company told me he had datasets that were too large to run simple regressions (GLMs, which involve optimization issues), and that they were thinking of a reward for the one who would write the best R code (at least the fastest). My first idea was to use subsampling techniques, arguing that 10 regressions on 100,000 observations can take less time than a regression on 1,000,000 observations. And perhaps also provide better results…
• Time to run a regression, as a function of the number of observations
Here, I generate a dataset as follows
$Y_i\sim\mathcal{P}(\exp[\lambda_i]),\qquad \lambda_i=0.2\,X_{5,i}-4\,f_{2,5}(X_{3,i})+X_{1,i}+\mathbf{1}(X_{2,i}=A)-2\cdot\mathbf{1}(X_{2,i}=B)-5\cdot\mathbf{1}(X_{2,i}=C)$
(where $f_{2,5}$ is the density of the Beta(2,5) distribution) and we fit a Poisson regression of $Y_i$ on $s(X_{1,i}), X_{2,i},\dots,X_{6,i}$, with $\log E_i$ as an offset,
where $s(\cdot)$ is a spline function (just to make it as general as possible, since in insurance ratemaking, we include continuous variates that do not influence claims frequency linearly in the score). Yes, there might also be useless variables, including one which is strongly correlated with one that has an impact in the regression. The code to generate the dataset is simply
> library(splines)   # for bs(), used in the regressions below
> library(mnormt)    # for rmnorm()
> n=10000
> X1=rexp(n)
> X2=sample(c("A","B","C"),size=n,replace=TRUE)
> X3=runif(n)
> Z=rmnorm(n,c(0,0),matrix(c(1,0.8,.8,1),2,2))
> X4=Z[,1]
> X5=Z[,2]
> X6=X1^2
> E=runif(n)
> lambda=.2*X5-4*dbeta(X3,2,5)+X1+
+1*(X2=="A")-2*(X2=="B")-5*(X2=="C")
> Y=rpois(n,exp(lambda))
> base=data.frame(Y,X1,X2,X3,X4,X5,X6,E)
We would like to study the time it takes to run a regression, as a function of the size (i.e. the number of lines $n$) of the dataset.
> system.time( glm(Y~bs(X1)+X2+X3+X4+
+ X5+X6+offset(log(E)),family=poisson,
+ data=base) )
utilisateur système écoulé
0.25 0.00 0.25
Here, the time I look at is the last one (the elapsed time). So far this was rather simple, but it is not the best model I can get. Let us use a stepwise (backward) variable selection,
> system.time( step(glm(Y~bs(X1)+X2+X3+
+ X4+X5+X6+offset(log(E)),family=poisson,
+ data=base)) )
Start: AIC=2882.1
Y ~ bs(X1) + X2 + X3 + X4 + X5 + X6 + offset(log(E))
Step: AIC=2882.1
Y ~ bs(X1) + X2 + X3 + X4 + X5 + offset(log(E))
Df Deviance AIC
<none> 2236.0 2882.1
- X5 1 2240.1 2884.2
- X4 1 2244.1 2888.2
- X3 1 4783.2 5427.3
- X2 2 5311.4 5953.5
- bs(X1) 3 6273.7 6913.8
utilisateur système écoulé
1.82 0.03 1.86
Finally, from the first regression, we have points in black (based on 200 simulated datasets), and with a stepwise procedure, we have the points in red.
i.e. it might look linear (proportional), but if it were linear, then on a log-log scale, we should also have straight lines, with slope 1,
Actually, it looks like a convex function.
The interpretation of that convexity might lead to misinterpretation: on the graph below on the left, a dataset two times bigger than the previous one (black point) will take less than two times longer to run, while on the right, it will take more than two times longer,
Convexity can simply be interpreted as “too large datasets take time, and too small too…”. Which is a first step: it should be interesting, in some cases, to run several regressions on smaller datasets….
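For completeness, here is a rough sketch (mine, not the original code) of how such a timing curve can be collected, assuming base has been regenerated with at least max(sizes) rows and that library(splines) is loaded for bs(),
sizes=c(1000,5000,10000,50000,100000)
times=sapply(sizes,function(m)
  # elapsed time of a single plain glm fit on the first m lines
  system.time(glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
    family=poisson,data=base[1:m,]))["elapsed"])
plot(sizes,times,log="xy",xlab="number of lines",ylab="elapsed time (seconds)")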
• Running 100 regressions on 100 lines, or running 1 regression on 10,000 lines ?
Here, we have datasets with $n$=200,000 lines. The question is how long it will take if we subdivide them into $k$ subsamples (of equal size), and run $k$ regressions,
> nk=trunc(n/k); classe=rep(1:k,each=nk); nt=nk*k
> base=data.frame(Y[1:nt],X1[1:nt],
+ X2[1:nt],X3[1:nt],X4[1:nt],X5[1:nt],
+ X6[1:nt],E[1:nt],classe)
> system.time( for(j in 1:k){
+ glm(Y~bs(X1)+X2+X3+X4+X5+
+ X6+offset(log(E)),family=poisson
+ ,data=base,subset=classe==j) })
utilisateur système écoulé
1.31 0.00 1.31
> system.time( for(j in 1:k){
+ step(glm(Y~bs(X1)+X2+X3+
+ X4+X5+X6+offset(log(E)),family=
+ poisson,data=base,subset=classe==j)) })
Start: AIC=183.97
Y ~ bs(X1) + X2 + X3 + X4 + X5 + X6 + offset(log(E))
[…]
Df Deviance AIC
<none> 117.15 213.04
- X2 2 250.15 342.04
- X3 1 251.00 344.89
- X4 1 420.63 514.53
- bs(X1) 3 626.84 716.74
utilisateur système écoulé
11.97 0.03 12.31
On the graph below, we have the time (y-axis, here on a log scale) it took to run $k$ regressions on samples of size $n/k$, as a function of $k$ (x-axis), including the time it took to run the regression on the dataset of size $n$, which is the concentration of dots on the left (i.e. $k$=1), both on the 6 regressors – in black – and with a stepwise procedure – in red. One has to keep in mind that I did not remove the printing option in the stepwise procedure, so it might be difficult to compare the two clouds (black vs. red). Nevertheless, we clearly see that if we run $k$ regressions on samples of size $n/k$, when $k$ is not too large, i.e. less than 10 or 15, it is not longer than the regression on $n$=200,000 lines.
So here we see that running 100 regressions on 2,000 lines is longer than running 1 regression on 200,000 lines… But maybe we are not comparing things that are actually comparable: what if it takes a bit longer, but we strongly improve the quality of our estimators?
• What about the quality of the output ?
Here, we consider only one dataset, with $n$=100,000 lines (just to make it run a bit faster), and $k$=20 subsets. Recall that the generated dataset and the fitted model are the same as above.
Here, we plot one estimated coefficient $\hat\beta$ and a confidence interval, defined as $\hat\beta\pm 1.96\,\widehat{\mathrm{se}}(\hat\beta)$.
The light blue segment is the initial estimator, while the blue one is obtained from the stepwise procedure. The grey area represents the estimation on the overall sample, while the segments on the right are the $k$ estimators (each on samples of size $n/k$).
We can see that we have much more volatility on those estimators, but their average (horizontal dotted lines) is not so bad… The true value (i.e. the one used to generate the dataset) is the dotted black horizontal line.
And if we repeat that on 1,000 simulated datasets, we obtain the following distribution for $\hat\beta$ (blue line), so we have an unbiased estimator of our parameter (the vertical line being the true value), here including a stepwise procedure,
But if we now add the average of the estimators obtained on the $k$ subsamples (the red curve – the previous density is now the light blue curve in the back), we see that taking the average of estimators on subsamples is not bad at all, on the contrary,
and for those who think that the stepwise procedure is a mistake, here is what we get without it,
So what we can see is that running 20 regressions can take (a little) more time (from what we’ve seen earlier) than running only one on the whole dataset…. but it provides better estimates. So the tradeoff is not that simple, and maybe running several regressions on huge datasets can be a proper alternative.
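As a final illustration (my own sketch, not code from the post), the averaged estimator discussed above can be computed directly by fitting the model on the $k$ disjoint subsamples and averaging the coefficients,
k=20
classe=rep(1:k,length.out=nrow(base))
coefs=sapply(1:k,function(j)
  # one glm per subsample; all fits share the same formula, so coefficients align
  coef(glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
    family=poisson,data=base,subset=classe==j)))
rowMeans(coefs)   # averaged estimator over the k subsamples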
# Mortality tables
As promised, here is a short post explaining the main mortality tables used (in France),
• The TD and TV 88-90 tables
These tables are getting old, and if I keep mentioning them in class, it is because they are simple to use (and to keep convincing myself that I have not aged since my own studies). The table is actually so official that it can be found in the law (here), in a decree of April 1993. The so-called TD 88-90 table (D for Décès, death) was built by INSEE from observations made between 1988 and 1990 on a male population. It was used to compute premiums for death benefit (term life) insurance contracts. The so-called TV 88-90 table (V for Vie, life) was built by INSEE from observations made between 1988 and 1990 on a female population. It was used to compute premiums for insurance contracts payable in case of life. These tables can be retrieved with the following code,
> TD=read.table("https://perso.univ-rennes1.fr/arthur.charpentier/TD8890.csv",
+ sep=";",header=TRUE)
These tables have since been replaced by the so-called TH and TF tables, respectively.
• The TH and TF 00-02 tables
These tables were built from INSEE data on the French population between 2000 and 2002, and were smoothed. They are generational tables, which require an age correction to account for mortality differences between generations. They have been applicable since January 1st, 2006. The Institut des actuaires published a user's guide online here, and Cimon discussed it on his blog ().
• The TPRV 93 table
The TPRV 93 table (for Table Prospective de Rente Viagère, i.e. prospective life annuity table) is an extract of the so-called floor table for the pricing of life annuity contracts. It was published in the decree of July 28th, 1993 (here, without the appendices), and corresponds to a prospective table tracking the mortality of the 1887 to 1993 generations (prospective tables are covered in the Master 2 program).
The TPRV 93 gives the complete table for the 1950 generation. The table is online here (as a csv file), and can be read in R with the following code,
> TD=read.table("https://perso.univ-rennes1.fr/arthur.charpentier/TPRV.csv",
+ sep=";",header=TRUE)
But this is only the first step. One then uses an age shift.
We discussed this point in the tutorial (but on mortality rates); it is also known as Rueff's hypothesis, which translates the decrease in mortality rates into a rejuvenation (i.e. a gain in ages), i.e.
Note that, in fact, the age shift depends on the level of interest rates. The full set of age-shift tables is online here.
• The TGH and TGF 05 tables
Here again, these tables are based on a population of annuitants, as before (here for the 38 pages of the official text). Hence, these tables were built on a population different from the one used for the TH and TF tables (which were built on the whole French population). The methodology is described here.
https://www.physicsforums.com/threads/how-does-a-frequency-multiplier-work.343272/
# How does a frequency multiplier work?
1. Oct 6, 2009
### rjsalmon
Hi,
I'm trying to work out how frequency multipliers physically work. What processes go on in order to multiply an input signal by a desired amount (i.e. 2 or 3 times the original frequency)? I am interested in their use to generate a terahertz signal, though I assume they work the same at most frequencies.
I hope someone can help.
2. Oct 6, 2009
### vk6kro
I can think of two types of frequency multiplier.
The first uses an amplifier that is deliberately driven into distortion so that a complex waveform is produced.
This will contain output on multiples of the input frequency.
Tuned circuits are then tuned to the required harmonic to recover this harmonic and reject others.
This is typically done in receivers and transmitters where a known stable signal is available but a multiple of that frequency is needed as a local oscillator for a mixer.
The other type of frequency multiplier is one that uses a free running oscillator near the desired high frequency. This oscillator frequency is divided by some exact amount using a digital divider.
This divided down signal is then compared with a known reference in a phase comparator and an error signal is sent back to the oscillator to pull it onto an exact multiple of the reference signal.
I also saw a reference to someone multiplying the frequency of a red laser to get green laser output. I have no idea how they did that.
3. Oct 6, 2009
### waht
Multiplication occurs when a sine wave is subjected to a non-linear device, such as a diode, that has an exponential response
$$i \sim e^v$$
If you know calculus, you can expand the exponential using Taylor series:
$$i \sim 1 + v + \frac{v^2}{2} + \frac{v^3}{6} + \dots + \frac{v^n}{n!}$$
Notice, we have the v^2 term - if you square a sine wave
$$v = \sin(\omega t)$$
$$\sin^2(\omega t) = \frac{1}{2}(1-\cos(2\omega t))$$
and hence the frequency is doubled. Note that the higher order terms are usually negligible. It is possible to optimize the diode for the 3rd order term to triple the frequency, or to quadruple it, but the power goes down quickly.
Last edited: Oct 6, 2009
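A quick numerical check of this (my own sketch, not from the thread; the 0.5 drive level and the 50 Hz tone are arbitrary choices): feed a sine wave through an exponential non-linearity and look at the magnitude spectrum,
fs=1000                      # sampling rate (Hz)
t=(0:(fs-1))/fs              # one second of samples, 1 Hz frequency resolution
v=sin(2*pi*50*t)             # 50 Hz input tone
i=exp(0.5*v)                 # exponential "diode-like" response
spec=Mod(fft(i))/fs          # magnitude spectrum
spec[c(50,100,150)+1]        # energy shows up at 50, 100, 150 Hz, falling off quickly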
4. Oct 6, 2009
### waht
The t-rays are primarily generated by multiplication, however, the expense and quality of components that goes in making these is huge. A simple set up could cost you as much as a new car.
Instead, consider working at lower frequencies in the KHz, and MHz range which is cheap to do.
Also as vk6kro, mentioned, the light in green laser pointers is actually derived from an infrared laser shining into a non-linear crystal that generates the 2nd harmonic (green light), you can experiment with that:
Here is an example of a laser doubler:
Last edited by a moderator: Sep 25, 2014
5. Oct 6, 2009
### rjsalmon
Brilliant, thank you both very much. It's too difficult to find this information online.
Rob
6. Oct 6, 2009
### halajeeb
Thank you for sharing your information
https://ysharifi.wordpress.com/2010/12/13/groups-of-order-n-with-gcdn-phin1-are-cyclic/
Groups of order n with gcd(n, phi(n))=1 are cyclic
Posted: December 13, 2010 in Elementary Algebra; Problems & Solutions, Groups and Fields
Let $\varphi$ be the Euler totient function. Before getting into the main problem we give a useful lemma.
Lemma. Let $H$ be a subgroup of a group $G.$ Let $N(H)$ and $C(H)$ be the normalizer and the centralizer of $H$ in $G.$ Then $C(H) \subseteq N(H)$ and $N(H)/C(H)$ is isomorphic to a subgroup of $Aut(H).$
Proof. Define $f: N(H) \longrightarrow Aut(H)$ by $f(x)(h)=xhx^{-1}$ for all $x \in N(H)$ and $h \in H.$ See that $f$ is a well-defined group homomorphism and $\ker f = C(H). \ \Box$
Problem. Let $G$ be a group of order $n.$ Prove that if $\gcd(n, \varphi(n))=1,$ then $G$ is cyclic.
Solution. The proof is by induction on $n.$ The case $n = 1$ is trivial. For $n > 1,$ since $\gcd(n, \varphi(n))=1,$ we must have $n=p_1 p_2 \cdots p_k$ for some distinct primes $p_i.$ Let $P$ be a Sylow $p_1$-subgroup of $G$ and let
$K = N(P)/C(P).$
By the lemma, $K$ is isomorphic to a subgroup of $Aut(P).$ But since $P$ is a cyclic group of order $p_1,$ we have $|Aut(P)|=p_1-1$ and thus $|K| \mid p_1 - 1.$ Hence $|K| \mid \varphi(n).$ Clearly $|K| \mid n$ and so $|K|=1$ because $\gcd(n, \varphi(n))=1.$ So $N(P) = C(P)$ and thus, by Burnside’s normal complement theorem, there exists a normal subgroup $Q$ of $G$ such that
$G=PQ, \ P \cap Q = \{1\}.$
Thus $|Q|=p_2 \cdots p_k < n$ and hence, by the induction hypothesis, $Q$ is cyclic. Let
$L = N(Q)/C(Q)= G/C(Q).$
Again, by the lemma, $L$ is isomorphic to a subgroup of $Aut(Q)$. Also, since $Q$ is cyclic,
$|Aut(Q)|= (p_2-1) \cdots (p_k-1) \mid \varphi(n).$
Therefore $|L| \mid \varphi(n).$ But clearly $|L| \mid n$ and thus $|L|=1,$ i.e. $G = C(Q).$ So $Q$ is in the center of $G$ and therefore $G$ is abelian because $P$ is abelian, $Q$ is in the center of $G$ and $G=PQ.$ Hence $G \cong P \times Q$ and so $G$ is cyclic because both $P$ and $Q$ are cyclic with coprime orders. $\Box$
Remark 1. By the fundamental theorem of finite abelian groups, a group of square-free order is abelian if and only if it is cyclic. This result can also be used instead of the last line of the solution.
Remark 2. The converse of the problem is also true, i.e. if $n$ is a positive integer and the only group of order $n$ is $\mathbb{Z}/n \mathbb{Z},$ then $\gcd(n, \varphi(n))=1.$
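As a quick sanity check (my own R snippet, not part of the post), one can list the small values of $n$ with $\gcd(n,\varphi(n))=1,$ i.e. exactly the orders for which every group is cyclic; note that $n=15$ is the first such composite number greater than $1.$
gcd <- function(a, b) if (b == 0) a else gcd(b, a %% b)
phi <- function(n) sum(sapply(1:n, function(k) gcd(k, n) == 1))   # Euler's totient
Filter(function(n) gcd(n, phi(n)) == 1, 1:50)
# 1 2 3 5 7 11 13 15 17 19 23 29 31 33 35 37 41 43 47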
http://mathhelpforum.com/number-theory/163222-floor-function.html
Math Help - Floor Function
1. Floor Function
Prove:
$\displaystyle\left \lfloor \frac{n}{2} \right \rfloor=\frac{n-1}{2} \ \mbox{if n is odd}$
Since n is odd, $\displaystyle n=2p+1 \ \ni \ p\in\mathbb{Z}$
$\displaystyle\left \lfloor \frac{2p+1}{2} \right \rfloor$ but I am not sure how that will help.
2. As a hint, consider $\frac{2p+1}{2}=p+\frac{1}{2}$. Then, $\lfloor p+\frac{1}{2}\rfloor=p$.
3. I have that down as well but don't see the connection.
4. Then what is $\frac{n-1}{2}$?
It is $\frac{n-1}{2}=\frac{(2p+1)-1}{2}=\frac{2p}{2}=p$.
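Putting the two hints together (my own summary of the thread, not a new post):
$\displaystyle\left\lfloor \frac{n}{2} \right\rfloor=\left\lfloor \frac{2p+1}{2} \right\rfloor=\left\lfloor p+\frac{1}{2} \right\rfloor=p=\frac{(2p+1)-1}{2}=\frac{n-1}{2}.$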
https://www.aimsciences.org/article/doi/10.3934/eect.2016004
# American Institute of Mathematical Sciences
June 2016, 5(2): 251-272. doi: 10.3934/eect.2016004
## On a parabolic-hyperbolic filter for multicolor image noise reduction
1 Taras Shevchenko National University of Kyiv, Faculty of Cybernetics, 4D Glushkov Ave, 03680 Kyiv, Ukraine 2 Karlsruhe Institute of Technology, Department of Mathematics, Englerstrasse 2, 76131 Karlsruhe, Germany
Received March 2016 Revised May 2016 Published June 2016
We propose a novel PDE-based anisotropic filter for noise reduction in multicolor images. It is a generalization of Nitzberg & Shiota's (1992) model being a hyperbolic relaxation of the well-known parabolic Perona & Malik's filter (1990). First, we consider a `spatial' mollifier-type regularization of our PDE system and exploit the maximal $L^{2}$-regularity theory for non-autonomous forms to prove a well-posedness result both in weak and strong settings. Again, using the maximal $L^{2}$-regularity theory and Schauder's fixed point theorem, respective solutions for the original quasilinear problem are obtained and the uniqueness of solutions with a bounded gradient is proved. Finally, the long-time behavior of our model is studied.
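For readers unfamiliar with the parabolic baseline, here is a rough sketch in R (mine, not the paper's hyperbolic scheme) of one explicit time step of the classical Perona & Malik filter on a grayscale image stored as a numeric matrix; the time step dt and contrast parameter K are assumed tuning constants.
perona_malik_step <- function(u, dt = 0.1, K = 10) {
  up    <- rbind(u[1, , drop = FALSE], u[-nrow(u), ])   # neighbour above (replicated border)
  down  <- rbind(u[-1, ], u[nrow(u), , drop = FALSE])   # neighbour below
  left  <- cbind(u[, 1, drop = FALSE], u[, -ncol(u)])   # neighbour to the left
  right <- cbind(u[, -1], u[, ncol(u), drop = FALSE])   # neighbour to the right
  g <- function(x) exp(-(x / K)^2)                      # edge-stopping function
  d <- function(v) g(v - u) * (v - u)                   # diffusion-weighted difference
  u + dt * (d(up) + d(down) + d(left) + d(right))
}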
Citation: Valerii Maltsev, Michael Pokojovy. On a parabolic-hyperbolic filter for multicolor image noise reduction. Evolution Equations & Control Theory, 2016, 5 (2) : 251-272. doi: 10.3934/eect.2016004
##### References:
[1] L. Alvarez, F. Guichard, P.-L. Lions and J.-M. Morel, Axioms and fundamental equations of image processing,, Archive for Rational Mechanics and Analysis, 123 (1993), 199. doi: 10.1007/BF00375127. Google Scholar [2] H. Amann, Compact embeddings of vector-valued Sobolev and Besov spaces,, Glasnik Matematički, 35 (2000), 161. Google Scholar [3] H. Amann, Non-local quasi-linear parabolic equations,, Russian Mathematical Surveys, 60 (2005), 1021. doi: 10.1070/RM2005v060n06ABEH004279. Google Scholar [4] H. Amann, Time-delayed Perona-Malik type problems,, Acta Mathematica Universitatis Comenianae, 76 (2007), 15. Google Scholar [5] F. Andreu, C. Ballester, V. Caselles and J. M. Mazón, Minimizing total variational flow,, Differential and Integral Equations, 14 (2001), 321. Google Scholar [6] F. Andreu, C. Ballester, V. Caselles and J. M. Mazón, Some qualitative properties for the total variation flow,, Journal of Functional Analysis, 188 (2002), 516. doi: 10.1006/jfan.2001.3829. Google Scholar [7] W. Arendt and R. Chill, Global existence for quasilinear diffusion equations in isotropic nondivergence form,, Annali della Scuola Normale Superiore di Pisa (5), 9 (2010), 523. Google Scholar [8] V. Barbu, Nonlinear Differential Equations Of Monotone Types in Banach Spaces,, Springer Monographs in Mathematics, (2010). doi: 10.1007/978-1-4419-5542-5. Google Scholar [9] A. Belahmidi, Équations Aux Dérivées Partielles Appliquées à la Restauration et à L'agrandissement des Images,, PhD thesis, (2003). Google Scholar [10] A. Belahmidi and A. Chambolle, Time-delay regularization of anisotropic diffusion and image processing,, ESAIM: Mathematical Modelling and Numerical Analysis, 39 (2005), 231. doi: 10.1051/m2an:2005010. Google Scholar [11] A. Belleni-Morante and A. C. McBride, Applied Nonlinear Semigroups: An Introduction,, Wiley Series in Mathematical Methods in Practice, (1998). Google Scholar [12] G. Bellettini, V. Caselles and M. Novaga, The total variation flow in $\mathbbR^N$,, Journal of Differential Equations, 184 (2002), 475. doi: 10.1006/jdeq.2001.4150. Google Scholar [13] M. Burger, A. C. G. Menucci, S. Osher and M. Rumpf (eds.), Level Set and PDE Based Reconstruction Methods in Imaging, vol. 2090 of Lecture Notes in Mathematics,, Springer International Publishing, (1992). Google Scholar [14] J. Canny, Finding Edges and Lines in Images,, Technical Report 720, (1983). Google Scholar [15] G. R. Cattaneo, Sur une forme de l'équation de la chaleur éliminant le paradoxe d'une propagation instantanée,, Comptes Rendus de l'Académie des Sciences, 247 (1958), 431. Google Scholar [16] F. Catté, P.-L. Lions, J.-M. Morel and T. Coll, Image selective smoothing and edge detection by nonlinear diffusion,, SIAM Journal on Numerical Analysis, 29 (1992), 182. doi: 10.1137/0729012. Google Scholar [17] G. H. Cottet and M. El Ayyadi, A Volterra type model for image processing,, IEEE Transactions on Image Processing, 7 (1998), 292. doi: 10.1109/83.661179. Google Scholar [18] R. Dautray and J.-L. Lions, Evolution Problems, vol. 5 of Mathematical Analysis and Numerical Methods for Science and Technology,, Springer-Verlag, (1992). doi: 10.1007/978-3-642-58090-1. Google Scholar [19] D. Dier, Non-autonomous maximal regularity for forms of bounded variation,, Journal of Mathematical Analysis and Applications, 425 (2015), 33. doi: 10.1016/j.jmaa.2014.12.006. Google Scholar [20] M. E. Gurtin and A. C. 
Pipkin, A general theory of heat conduction with finite wave speeds,, Archive for Rational Mechanics and Analysis, 31 (1968), 113. doi: 10.1007/BF00281373. Google Scholar [21] A. Handlovičová, K. Mikula and F. Sgallari, Variational numerical methods for solving nonlinear diffusion equations arising in image processing,, Journal of Visual Communication and Image Representation, 13 (2002), 217. Google Scholar [22] M. Hieber and M. Murata, The $L^p$-approach to the fluid-rigid body interaction problem for compressible fluids,, Evolution Equations and Control Theory, 4 (2015), 69. doi: 10.3934/eect.2015.4.69. Google Scholar [23] M. Hochbruck, T. Jahnke and R. Schnaubelt, Convergence of an ADI splitting for Maxwell's equations,, Numerische Mathematik, 129 (2015), 535. doi: 10.1007/s00211-014-0642-0. Google Scholar [24] S. L. Keeling and R. Stollberger, Nonlinear anisotropic diffusion filtering for multiscale edge enhancement,, Inverse Problems, 18 (2002), 175. doi: 10.1088/0266-5611/18/1/312. Google Scholar [25] D. Marr and E. Hildreth, Theory of edge detection,, Proceedings of the Royal Society B, 207 (1980), 187. doi: 10.1098/rspb.1980.0020. Google Scholar [26] S. A. Morris, The Schauder-Tychonoff fixed point theorem and applications,, Matematický Časopis, 25 (1975), 165. Google Scholar [27] M. Nitzberg and T. Shiota, Nonlinear image filtering with edge and corner enhancement,, IEEE Transactions on Pattern Analysis and Machine Intelligence, 14 (1992), 826. doi: 10.1109/34.149593. Google Scholar [28] T. Ohkubo, Regularity of solutions to hyperbolic mixed problems with uniformly characteristic boundary,, Hokkaido Mathematical Journal, 10 (1981), 93. doi: 10.14492/hokmj/1381758116. Google Scholar [29] P. Perona and J. Malik, Scale space and edge detection using anisotropic diffusion,, IEEE Trans. Pattern Anal. Machine Intell., 12 (1990), 629. doi: 10.1109/34.56205. Google Scholar [30] J. Prüss, Maximal regularity of linear vector-valued parabolic Volterra equations,, Journal of Integral Equations and Applications, 3 (1991), 63. doi: 10.1216/jiea/1181075601. Google Scholar [31] J. Prüss, Evolutionary Integral Equations and Applications, vol. 87 of Monographs in Mathematics,, Birkhäuser Verlag, (1993). doi: 10.1007/978-3-0348-8570-6. Google Scholar [32] L. I. Rudin, S. Osher and E. Fatemi, Nonlinear total variation based noise removal algorithms,, Physica D: Nonlinear Phenomena, 60 (1992), 259. doi: 10.1016/0167-2789(92)90242-F. Google Scholar [33] G. Savaré, Regularity results for elliptic equations in Lipschitz domains,, Journal of Functional Analysis, 152 (1998), 176. doi: 10.1006/jfan.1997.3158. Google Scholar [34] D. W. Scott, Multivariate Density Estimation: Theory, Practice, and Visualization,, 2nd edition, (). Google Scholar [35] P. Secchi, Well-posedness of characteristic symmetric hyperbolic systems,, Archive for Rational Mechanics and Analysis, 134 (1996), 155. doi: 10.1007/BF00379552. Google Scholar [36] K. Takezawa, Introduction to Nonparametric Regression,, Wiley Series in Probability and Mathematical Statistics, (2006). Google Scholar [37] J. Weickert, Anisotropic Diffusion in Image Processing,, B. G. Teubner, (1998). Google Scholar [38] A. P. Witkin, Scale-space filtering,, Readings in Computer Vision: Issues, (1987), 329. doi: 10.1016/B978-0-08-051581-6.50036-2. Google Scholar [39] R. Zacher, Maximal regularity of type $L_p$ for abstract parabolic Volterra equations,, Journal of Evolution Equations, 5 (2005), 79. doi: 10.1007/s00028-004-0161-z. Google Scholar
https://settheory.mathtalks.org/david-fernandez-breton-algebraic-ramsey-theoretic-results-with-small-monochromatic-sets/
# David Fernández-Bretón: Algebraic Ramsey-theoretic results with small monochromatic sets
BIU seminar in Set Theory
November 5, 2018
Speaker: David J. Fernández Bretón (KGRC)
Title: Algebraic Ramsey-theoretic results with small monochromatic sets
Abstract: We will explore some (recent and not so recent; some positive,
some negative) Ramsey-type results (each of which is due to some subset
of the set {Komj\’ath, Hindman, Leader, H.S. Lee, P. Russell, Shelah, D.
Soukup, Strauss, Rinot, Vidnyánszky, myself}) where abelian groups are
coloured, and one attempts to obtain monochromatic sets defined in terms
of the group structure. We will focus specifically on two families of
very recent results: the first one concerns colouring groups with
uncountably many colours, attempting to obtain finite monochromatic
FS-sets; the second one concerns colouring groups (most of the time, our
group of interest is the real line $\mathbb R$ with its usual addition)
with finitely many colours, attempting to obtain countably infinite
monochromatic sumsets.
https://engineering.stackexchange.com/questions/2234/worm-gear-motor-selection-calculation
# Worm gear & motor selection calculation
I am trying to build a worm-gear drive for a single-axis solar tracker, and I need some guidance choosing a worm gear & drive motor for the application. These are the steps I have explored so far – if I am wrong, could someone please give links to material with a good explanation so I can learn and understand it better.
General equations for worm gear & motor:
1) $Efficiency = \frac{\text{output Power of worm gear}}{\text{input Power of worm gear}} = \frac{P_o}{P_\text{Input}}$
2) $\text{Output power} = P_o = 2*\pi*\text{Output speed} * \text{Output torque}/60$
• Output power in W (divide by 1,000 for kW)
• Output speed in rev/min
• Output torque in Nm
3) $\text{Input power} = P_\text{Input} = 2*\pi*\text{Input speed} * \text{Input torque}/60$
4) $\text{Net Torque}= T_\text{output} + T_\text{Input} + T_\text{holding} = 0$
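As a purely illustrative sanity check of equations (1)-(3) (the speed, torque and efficiency figures below are invented, not taken from the question):
# Hypothetical figures: output shaft turning at 1 rev/min against 500 Nm of load torque.
speed_rpm <- 1
torque_Nm <- 500
P_out_W <- 2 * pi * speed_rpm * torque_Nm / 60   # about 52 W at the output shaft
P_in_W  <- P_out_W / 0.4                         # assuming a 40% worm-gear efficiency
c(P_out_W, P_in_W)                               # the drive motor must supply roughly 131 W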
In order to drive a 250 kW plant I have assumed 15 kW per row, this gives me 16 rows with 1 column.
The worm gear has to drive 16 rows with coupling with single column item.
Each panel weighs 20 kg and there are 60 panel per row which gives 1200 kg. Each row has a structural weight around 700 kg.
This gives a net weight of 1900 kg per row.
So, the load on the worm gear would be 1900*16 = 30400 kg.
Based on my calculations above, how can I calculate the exact motor & drive motor required. I need some specific relation here.
• Is this homework? | The assumption that you have to provide a force equal to the total mass is a bad one. Practical systems may be balanced around or near a centre of mass and/or counterweighted and/or could use springs or pneumatics if absolutely essential (which it should not be). | Using a single motor with mechanical coupling makes no sense in modern practice. A motor per panel or set of panels is far more likely, for reasons of reliability, cost, mass, danger and more. | The basic formula in all such cases is power = force x distance per time, and work or energy = force x distance. – Russell McMahon Mar 28 '15 at 9:46
• Bonus: Power - watts ~= torque in kg.m x RPM. – Russell McMahon Mar 28 '15 at 9:46
• @RussellMcMahon can you give some appropriate calculation part – user50949 Mar 30 '15 at 3:48
• You have to do your part. I asked if this is "homework" (or similar). You still get answers, but they differ in approach. | You need to show you are thinking and understanding and show what calculations you have done. Your question re "giving some appropriate calculations part" does not indicate that you have taken ANY notice of what I said above. I made suggestions on balancing, on using multiple motors and on a formula for power from RPM and torque. YOU need to provide some more input now. | Is this an assignment or homework? If so, when is it due? – Russell McMahon Mar 30 '15 at 5:11
• @RussellMcMahon: I am just a student doing an internship. Can you explain with an example? What inputs do you need from my side? The basic calculations are given here; other relevant formulas, if known, should be mentioned. The question is whether I am on the right track with the calculation or not. If not, please suggest a relevant link – user50949 Mar 31 '15 at 4:41
https://philippmuens.com/minimax-and-mcts/
Do you remember your childhood days when you discovered the infamous game Tic-Tac-Toe and played it with your friends over and over again?
You might’ve wondered if there’s a certain strategy you can exploit that lets you win all the time (or at least force a draw). Is there such an algorithm that will show you how you can defeat your opponent at any given time?
It turns out there is. To be precise there are a couple of algorithms which can be utilized to predict the best possible moves in games such as Tic-Tac-Toe, Connect Four, Chess and Go among others. One such family of algorithms leverages tree search and operates on game state trees.
In this blog post we’ll discuss 2 famous tree search algorithms called Minimax and Monte Carlo Tree Search (abbreviated to MCTS). We’ll start our journey into tree search algorithms by discovering the intuition behind their inner workings. After that we’ll see how Minimax and MCTS can be used in modern game implementations to build sophisticated Game AIs. We’ll also shed some light into the computational challenges we’ll face and how to handle them via performance optimization techniques.
Let’s imagine that you’re playing some games of Tic-Tac-Toe with your friends. While playing you’re wondering what the optimal strategy might be. What’s the best move you should pick in any given situation?
Generally speaking there are 2 modes you can operate in when determining the next move you want to play:
Aggressive:
• Play a move which will cause an immediate win (if possible)
• Play a move which sets up a future winning situation
Defensive:
• Play a move which prevents your opponent from winning in the next round (if possible)
• Play a move which prevents your opponent from setting up a future winning situation in the next round
These modes and their respective actions are basically the only strategies you need to follow to win the game of Tic-Tac-Toe.
The “only” thing you need to do is to look at the current game state you’re in and play simulations through all the potential next moves which could be played. You do this by pretending that you’ve played a given move and then continue playing the game until the end, alternating between the X and O player. While doing that you’re building up a game tree of all the possible moves you and your opponent would play.
The following illustration shows a simplified version of such a game tree:
Note that for the rest of this post we’ll only use simplified game tree examples to save screen space
Of course, the set of strategic rules we’ve discussed at the top is specifically tailored to the game of Tic-Tac-Toe. However we can generalize this approach to make it work with other board games such as Chess or Go. Let’s take a look at Minimax, a tree search algorithm which abstracts our Tic-Tac-Toe strategy so that we can apply it to various other 2 player board games.
## The Minimax Algorithm
Given that we’ve built up an intuition for tree search algorithms let’s switch our focus from simple games such as Tic-Tac-Toe to more complex games such as Chess.
Before we dive in let’s briefly recap the properties of a Chess game. Chess is a 2 player deterministic game of perfect information. Sound confusing? Let’s unpack it:
In Chess, 2 players (Black and White) play against each other. Every move which is performed is ensured to be “fulfilled” with no randomness involved (the game doesn’t use any random elements such as a die). During gameplay every player can observe the whole game state. There’s no hidden information, hence everyone has perfect information about the whole game at any given time.
Thanks to those properties we can always compute which player is currently ahead and which one is behind. There are several different ways to do this for the game of Chess. One approach to evaluate the current game state is to add up all the remaining white pieces on the board and subtract all the remaining black ones. Doing this will produce a single value where a large value favors white and a small value favors black. This type of function is called an evaluation function.
Based on this evaluation function we can now define the overall goal during the game for each player individually. White tries to maximize this objective while black tries to minimize it.
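To make this a bit more concrete, here's a minimal sketch of what such a material-count evaluation could look like in Python. The dict-based board representation and the piece values are assumptions made purely for illustration (they're not tied to any particular chess library); the minimax code later in this post assumes a function along these lines:

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluation_function(board):
    # Material count: positive favors white, negative favors black.
    score = 0
    for piece in board.values():            # e.g. board = {"e1": "wK", "d8": "bQ", ...}
        color, kind = piece[0], piece[1]
        value = PIECE_VALUES[kind]
        score += value if color == "w" else -value
    return score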
Let’s pretend that we’re deep in an ongoing Chess game. We’re player white and have already played a couple of clever moves, resulting in a large number computed by our evaluation function. It’s our turn right now but we’re stuck. Which of the possible moves is the best one we can play?
We’ll solve this problem with the same approach we already encountered in our Tic-Tac-Toe gameplay example. We build up a tree of potential moves which could be performed based on the game state we’re in. To keep things simple we pretend that there are only 2 possible moves we can play (in Chess there are on average ~30 different options for every given game state). We start with a (white) root node which represents the current state. Starting from there we’re branching out 2 (black) child nodes which represent the game state we’re in after taking one of the 2 possible moves. From these 2 child nodes we’re again branching out 2 separate (white) child nodes. Each one of those represents the game state we’re in after taking one of the 2 possible moves we could play from the black node. This branching out of nodes goes on and on until we’ve reached the end of the game or hit a predefined maximum tree depth.
The resulting tree looks something like this:
Given that we’re at the end of the tree we can now compute the game outcome for each end state with our evaluation function:
With this information we now know the game outcome we can expect when we take all the outlined moves starting from the root node and ending at the last node where we calculated the game evaluation. Since we’re player white it seems like the best move to pick is the one which will set us up to eventually end in the game state with the highest outcome our evaluation function calculated.
While this is true there’s one problem. There’s still the black player involved and we cannot directly manipulate what move she’ll pick. If we cannot manipulate this why don’t we estimate what the black player will likely do based on our evaluation function? As a white player we always try to maximize our outcome. The black player always tries to minimize the outcome. With this knowledge we can now traverse back through our game tree and compute the values for all our individual tree nodes step by step.
White tries to maximize the outcome:
While black wants to minimize it:
Once done we can now pick the next move based on the evaluation values we’ve just computed. In our case we pick the next possible move which maximizes our outcome:
What we’ve just learned is the general procedure of the so-called Minimax algorithm. The Minimax algorithm got its name from the fact that one player wants to Mini-mize the outcome while the other tries to Max-imize it.
### Code
import math

def minimax(state, max_depth, is_player_minimizer):
    if max_depth == 0 or state.is_end_state():
        # We're at the end. Time to evaluate the state we're in
        return evaluation_function(state)
    # Is the current player the minimizer?
    if is_player_minimizer:
        value = math.inf
        for move in state.possible_moves():
            evaluation = minimax(move, max_depth - 1, False)
            value = min(value, evaluation)
        return value
    # Or the maximizer?
    value = -math.inf
    for move in state.possible_moves():
        evaluation = minimax(move, max_depth - 1, True)
        value = max(value, evaluation)
    return value
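To actually pick a move at the root you'd call this function on every successor state and keep the best one. Here's a small sketch under the same assumed interface (state.possible_moves() returning successor states), with us acting as the maximizing player:

def best_move(state, max_depth):
    best_score, best_state = float("-inf"), None
    for candidate in state.possible_moves():
        # The child of a maximizer node is evaluated as a minimizer node.
        score = minimax(candidate, max_depth - 1, True)
        if score > best_score:
            best_score, best_state = score, candidate
    return best_state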
## Search space reduction with pruning
Minimax is a simple and elegant tree search algorithm. Given enough compute resources it will always find the optimal next move to play.
But there’s a problem. While this algorithm works flawlessly with simplistic games such as Tic-Tac-Toe, it’s computationally infeasible to implement it for strategically more involved games such as Chess. The reason for this is the so-called tree branching factor. We’ve already briefly touched on that concept before but let’s take a second look at it.
In our example above we've artificially restricted the potential moves one can play to 2 to keep the tree representation simple and easy to reason about. However the reality is that there are usually more than 2 possible next moves. On average there are ~30 moves a Chess player can play in any given game state. This means that every single node in the tree will have approximately 30 different children. This is called the width of the tree. We denote the tree's width as $$w$$.
But there's more. It takes roughly ~85 consecutive turns to finish a game of Chess. Translating this to our tree means that it will have an average depth of 85. We denote the tree's depth as $$d$$.
Given $$w$$ and $$d$$ we can define the formula $$w^d$$ which will show us how many different positions we have to evaluate on average.
Plugging in the numbers for Chess we get $$30^{85}$$. Taking the Go board game as an example which has a width $$w$$ of ~250 and an average depth $$d$$ of ~150 we get $$250^{150}$$. I encourage you to type those numbers into your calculator and hit enter. Needless to say that current generation computers and even large scale distributed systems will take "forever" to crunch through all those computations.
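If you'd rather not reach for a calculator, a couple of lines of Python give a feel for these magnitudes:

chess_positions = 30 ** 85       # roughly 10^125 positions
go_positions = 250 ** 150        # roughly 10^359 positions
print(f"Chess: about 10^{len(str(chess_positions)) - 1} positions")
print(f"Go:    about 10^{len(str(go_positions)) - 1} positions")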
Does this mean that Minimax can only be used for games such as Tic-Tac-Toe? Absolutely not. We can apply some clever tricks to optimize the structure of our search tree.
Generally speaking we can reduce the search tree's width and depth by pruning individual nodes and branches from it. Let's see how this works in practice.
### Alpha-Beta Pruning
Recall that Minimax is built around the premise that one player tries to maximize the outcome of the game based on the evaluation function while the other one tries to minimize it.
This gameplay behavior is directly translated into our search tree. During traversal from the bottom to the root node we always picked the respective “best” move for any given player. In our case the white player always picked the maximum value while the black player picked the minimum value:
Looking at our tree above we can exploit this behavior to optimize it. Here’s how:
While walking through the potential moves we can play given the current game state we’re in we should build our tree in a depth-first fashion. This means that we should start at one node and expand it by playing the game all the way to the end before we back up and pick the next node we want to explore:
Following this procedure allows us to identify moves which will never be played early on. After all, one player maximizes the outcome while the other minimizes it. The part of the search tree where a player would end up in a worse situation based on the evaluation function can be entirely removed from the list of nodes we want to expand and explore. We prune those nodes from our search tree and therefore reduce its width.
The larger the branching factor of the tree, the higher the amount of computations we can potentially save!
Assuming we can reduce the width by an average of 10 we would end up with $$w^d = (30 - 10)^{85} = 20^{85}$$ computations we have to perform. That's already a huge win.
This technique of pruning parts of the search tree which will never be considered during gameplay is called Alpha-Beta pruning. Alpha-Beta pruning got its name from the parameters $$\alpha$$ and $$\beta$$ which are used to keep track of the best score either player can achieve while walking the tree.
### Code
def minimax(state, max_depth, is_player_minimizer, alpha, beta):
    if max_depth == 0 or state.is_end_state():
        return evaluation_function(state)
    if is_player_minimizer:
        value = math.inf
        for move in state.possible_moves():
            evaluation = minimax(move, max_depth - 1, False, alpha, beta)
            value = min(value, evaluation)
            # Keeping track of the best score the minimizer can guarantee
            beta = min(beta, evaluation)
            if beta <= alpha:
                break
        return value
    value = -math.inf
    for move in state.possible_moves():
        evaluation = minimax(move, max_depth - 1, True, alpha, beta)
        value = max(value, evaluation)
        # Keeping track of the best score the maximizer can guarantee
        alpha = max(alpha, evaluation)
        if beta <= alpha:
            break
    return value
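When calling this version from the root, the α/β window starts fully open with α = -∞ and β = +∞. A sketch of the initial call (initial_state and the depth of 6 are just placeholders):

import math

best_value = minimax(initial_state, max_depth=6, is_player_minimizer=False,
                     alpha=-math.inf, beta=math.inf)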
Using Alpha-Beta pruning to reduce the tree's width helps us utilize the Minimax algorithm in games with large branching factors which were previously considered computationally too expensive.
In fact Deep Blue, the Chess computer developed by IBM which defeated the Chess world champion Garry Kasparov in 1997, heavily utilized parallelized Alpha-Beta based search algorithms.
It seems like Minimax combined with Alpha-Beta pruning is enough to build sophisticated game AIs. But there’s one major problem which can render such techniques useless. It’s the problem of defining a robust and reasonable evaluation function. Recall that in Chess our evaluation function added up all the white pieces on the board and subtracted all the black ones. This resulted in high values when white had an edge and in low values when the situation was favorable for black. While this function is a good baseline and is definitely worthwhile to experiment with there are usually more complexities and subtleties one needs to incorporate to come up with a sound evaluation function.
Simple evaluation metrics are easy to fool and exploit once the underlying internals are surfaced. This is especially true for more complex games such as Go. Engineering an evaluation function which is complex enough to capture the majority of the necessary game information requires a lot of thought and interdisciplinary domain expertise in Software Engineering, Math, Psychology and the game at hand.
Isn’t there a universally applicable evaluation function we could leverage for all games, no matter how simple or complex they are?
Yes, there is! And it’s called randomness. With randomness we let chance be our guide to figure out which next move might be the best one to pick.
In the following we’ll explore the so-called Monte Carlo Tree Search (MCTS) algorithm which heavily relies on randomness (the name “Monte Carlo” stems from the gambling district in Monte Carlo) as a core component for value approximations.
As the name implies, MCTS also builds up a game tree and does computations on it to find the path of the highest potential outcome. But there’s a slight difference in how this tree is constructed.
Let’s once again pretend that we’re playing Chess as player white. We’ve already played for a couple of rounds and it’s on us again to pick the next move we’d like to play. Additionally let’s pretend that we’re not aware of any evaluation function we could leverage to compute the value of each possible move. Is there any way we could still figure out which move might put us into a position where we could win at the end?
As it turns out there’s a really simple approach we can take to figure this out. Why don’t we let both player play dozens of random games starting from the state we’re currently in? While this might sound counterintuitive it make sense if you think about it. If both player start in the given game state, play thousands of random games and player white wins 80% of the time, then there must be something about the state which gives white an advantage. What we’re doing here is basically exploiting the Law of large numbers (LLN) to find the “true” game outcome for every potential move we can play.
The following description will outline how the MCTS algorithm works in detail. For the sake of simplicity we again focus solely on 2 playable moves in any given state (as we’ve already discovered there are on average ~30 different moves we can play in Chess).
Before we move on we need to get some minor definitions out of the way. In MCTS we keep track of 2 different parameters for every single node in our tree. We call those parameters $$t$$ and $$n$$. $$t$$ stands for "total" and represents the total value of that node. $$n$$ is the "number of visits" which reflects the number of times we've visited this node while walking through the tree. When creating a new node we always initialize both parameters with the value 0.
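In code, this per-node bookkeeping can be captured by a tiny class. The following is only a sketch; the state, parent and children fields are assumptions about how one might wire up the tree:

class Node:
    def __init__(self, state, parent=None):
        self.state = state      # game state this node represents
        self.parent = parent    # None for the root node
        self.children = []      # child nodes, added when the node is expanded
        self.t = 0              # total value accumulated through rollouts
        self.n = 0              # number of times this node has been visited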
In addition to the 2 new parameters we store for each node, there's the so-called "Upper Confidence Bound 1" (UCT) formula which looks like this
$x_i + C\sqrt{\frac{\ln(N)}{n_i}}$
This formula basically helps us in deciding which upcoming node and therefore potential game move we should pick to start our random game series (called "rollout") from. In the formula $$x_i$$ represents the average value of the game state we're working with, $$C$$ is a constant called "temperature" we need to define manually (we just set it to 1.5 in our example here. More on that later), $$N$$ represents the parent node's visits and $$n_i$$ represents the current node's visits. When using this formula on candidate nodes to decide which one to explore further, we're always interested in the largest result.
Don't be intimidated by the math and just note that this formula exists and will be useful for us while working with our tree. We'll get into more details about its usage while walking through our tree.
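Translated into Python the formula might look as follows. This is a sketch that reuses the small Node class above; returning an infinite score for unvisited nodes mirrors the "divide by a tiny number" trick used in the worked examples below:

import math

def uct_score(node, c=1.5):
    if node.n == 0:
        return math.inf                     # unvisited nodes are always explored first
    average_value = node.t / node.n         # x_i in the formula
    exploration = c * math.sqrt(math.log(node.parent.n) / node.n)
    return average_value + exploration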
With this out of the way it's time to apply MCTS to find the best move we can play.
We start with the same root node of the tree we're already familiar with. This root node is our start point and reflects the current game state. Based on this node we branch off our 2 child nodes:
The first thing we need to do is to use the UCT formula from above and compute the results for both child nodes. As it turns out we need to plug in 0 for almost every single variable in our UCT formula since we haven't done anything with our tree and its nodes yet. This will result in $$\infty$$ for both calculations.
$S_1 = 0 + 1.5\sqrt{\frac{\ln(0)}{0.0001}} = \infty$
$S_2 = 0 + 1.5\sqrt{\frac{\ln(0)}{0.0001}} = \infty$
We've replaced the 0 in the denominator with a very small number because division by zero is not defined
Given this we're free to choose which node we want to explore further. We go ahead with the leftmost node and perform our rollout phase which means that we play dozens of random games starting with this game state.
Once done we get a result for this specific rollout (in our case the percentage of wins for player white). The next thing we need to do is to propagate this result up the tree until we reach the root node. While doing this we update both $$t$$ and $$n$$ with the respective values for every node we encounter. Once done our tree looks like this:
Next up we start at our root node again. Once again we use the UCT formula, plug in our numbers and compute its score for both nodes:
$S_1 = 30 + 1.5\sqrt{\frac{\ln(1)}{1}} = 30$
$S_2 = 0 + 1.5\sqrt{\frac{\ln(0)}{0.0001}} = \infty$
Given that we always pick the node with the highest value we'll now explore the rightmost one. Once again we perform our rollout based on the move this node proposes and collect the end result after we've finished all our random games.
The last thing we need to do is to propagate this result up until we reach the root of the tree. While doing this we update the parameters of every node we encounter.
We've now successfully explored 2 child nodes in our tree. You might've guessed it already. We'll start again at our root node and calculate every child node's UCT score to determine the node we should explore further. In doing this we get the following values:
$S_1 = 30 + 1.5\sqrt{\frac{\ln(2)}{1}} \approx 31.25$
$S_2 = 20 + 1.5\sqrt{\frac{\ln(2)}{1}} \approx 21.25$
The largest value is the one we've computed for the leftmost node so we decide to explore that node further.
Given that this node has no child nodes we add two new nodes to the tree which represent the potential moves we can play from here. We initialize both of their parameters ($$t$$ and $$n$$) with 0.
Now we need to decide which one of those two nodes we should explore further. And you're right. We use the UCT formula to calculate their values. Given that both have $$t$$ and $$n$$ values of zero they're both $$\infty$$ so we decide to pick the leftmost node. Once again we do a rollout, retrieve the value of those games and propagate this value up the tree until we reach the tree's root node, updating all the node parameters along the way.
The next iteration will once again start at the root node where we use the UCT formula to decide which child node we want to explore further. Since we can see a pattern here and I don't want to bore you I'm not going to describe the upcoming steps in great detail. What we'll be doing is following the exact same procedure we've used above which can be summarized as follows:
1. Start at the root node and use the UCT formula to calculate the score for every child node
2. Pick the child node for which you've computed the highest UCT score
3. Check if the child has already been visited
• If not, do a rollout
• If yes, determine the potential next states from there
• Use the UCT formula to decide which child node to pick
• Do a rollout
4. Propagate the result back through the tree until you reach the root node
We iterate over this algorithm until we run out of time or reach a predefined threshold value of visits, depth or iterations. Once this happens we evaluate the current state of our tree and pick the child node(s) which maximize the value $$t$$. Thanks to the dozens of games we've played and the Law of large numbers we can be very certain this move is the best one we can possibly play.
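Putting the steps above together, the outer loop of MCTS could be sketched like this. It's deliberately simplified and reuses the Node class and uct_score function from earlier; expand() (attaching child nodes for the possible moves) and rollout() (playing random games from a state and returning their combined result) are assumed helpers rather than a complete implementation:

def mcts(root, iterations=1000):
    for _ in range(iterations):
        # 1. Selection: walk down the tree, always following the highest UCT score.
        node = root
        while node.children:
            node = max(node.children, key=uct_score)
        # 2. Expansion: if this leaf was visited before, expand it and pick a child.
        if node.n > 0:
            expand(node)                          # assumed helper: attaches child Nodes
            if node.children:
                node = max(node.children, key=uct_score)
        # 3. Simulation: play random games starting from this node's state.
        result = rollout(node.state)              # assumed helper: returns the rollout value
        # 4. Backpropagation: push the result back up to the root, updating t and n.
        while node is not None:
            node.t += result
            node.n += 1
            node = node.parent
    # Finally, pick the child of the root with the highest total value t.
    return max(root.children, key=lambda child: child.t)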
That's all there is. We've just learned, applied and understood Monte Carlo Tree Search!
You might agree that it seems like MCTS is very compute intensive since you have to run through thousands of random games. This is definitely true and we need to be very clever as to where we should invest our resources to find the most promising path in our tree. We can control this behavior with the aforementioned "temperature" parameter $$C$$ in our UCT formula. With this parameter we balance the trade-off between "exploration vs. exploitation".
A large $$C$$ value puts us into "exploration" mode. We'll spend more time visiting least-explored nodes. A small value for $$C$$ puts us into "exploitation" mode where we'll revisit already explored nodes to gather more information about them.
Given the simplicity and applicability due to the exploitation of randomness, MCTS is a widely used game tree search algorithm. DeepMind extended MCTS with Deep Neural Networks to optimize its performance in finding the best Go moves to play. The resulting Game AI was so strong that it reached superhuman level performance and defeated the Go World Champion Lee Sedol 4-1.
## Conclusion
In this blog post we’ve looked into 2 different tree search algorithms which can be used to build sophisticated Game AIs.
While Minimax combined with Alpha-Beta pruning is a solid solution to approach games where an evaluation function to estimate the game outcome can easily be defined, Monte Carlo Tree Search (MCTS) is a universally applicable solution given that no evaluation function is necessary due to its reliance on randomness.
Raw Minimax and MCTS are only the start and can easily be extended and modified to work in more complex environments. DeepMind cleverly combined MCTS with Deep Neural Networks to predict Go game moves whereas IBM extended Alpha-Beta tree search to compute the best possible Chess moves to play.
I hope that this introduction to Game AI algorithms sparked your interest in Artificial Intelligence and helps you understand the underlying mechanics you’ll encounter the next time you pick up a board game on your computer.
https://stats.stackexchange.com/questions/449097/limit-of-t-distribution-as-n-goes-to-infinity
Limit of $t$-distribution as $n$ goes to infinity
I found in my intro to stats textbook that $$t$$-distribution approaches the standard normal as $$n$$ goes to infinity. The textbook gives the density for $$t$$-distribution as follows, $$f(t)=\frac{\Gamma\left(\frac{n+1}{2}\right)}{\sqrt{n\pi}\Gamma\left(\frac{n}{2}\right)}\left(1+\frac{t^2}{n}\right)^{-\frac{n+1}{2}}$$
I think it might be possible to show that this density converges (uniformly) to the density of normal as $$n$$ goes to infinity. Given $$\lim_{n\to \infty}\left(1+\frac{t^2}{n}\right)^{-\frac{n+1}{2}}=e^{-\frac{t^2}{2}}$$, it would be great if we can show $$\frac{\Gamma\left(\frac{n+1}{2}\right)}{\Gamma\left(\frac{n}{2}\right)}\to \frac{\sqrt{n}}{2}$$ as $$n\to \infty$$, yet I am stuck here. Can someone point out how to proceed or an alternative way to show that $$t$$-distribution converges to normal as $$n\to \infty$$. Thanks!
• why do you think this holds true? – Aksakal Feb 12 at 4:47
• An alternative way begins with the observation that the Student t is a variance mixture of Gaussians. You needn't worry about the Gamma factors, because they simply Normalize the distribution, so you have already answered your question. (Use logs and Taylor's Theorem to demonstrate uniform convergence.) BTW, that Gamma ratio is incorrect. If it were right, then asymptotically $n/2=\Gamma(n/2+1)/\Gamma(n/2)$ would behave like $\sqrt{n+1}/2\times\sqrt{n} /2=\sqrt{n^2+n}/4\approx n/4.$ Evidently $\sqrt{n}/2$ should be $\sqrt{n/2}.$ You can apply Stirling's formula if you aren't convinced. – whuber Feb 12 at 4:53
• Here's the thread on the variance mixture expression: stats.stackexchange.com/questions/52906. – whuber Feb 12 at 4:54
• You can also show this using Slutsky's theorem: math.stackexchange.com/q/3240536/321264. – StubbornAtom Feb 12 at 15:28
Stirling's approximation gives $$\Gamma(z) = \sqrt{\frac{2\pi}{z}}\,{\left(\frac{z}{e}\right)}^z \left(1 + O\left(\tfrac{1}{z}\right)\right)$$ so
$$\frac{\Gamma(\frac{n+1}{2})}{\Gamma(\frac{n}{2})} = \dfrac{\sqrt{\frac{2\pi}{\frac{n+1}{2}}}\,{\left(\frac{\frac{n+1}{2}}{e}\right)}^{\frac{n+1}{2}}}{\sqrt{\frac{2\pi}{\frac{n}{2}}}\,{\left(\frac{\frac{n}{2}}{e}\right)}^{\frac{n}{2}}}\left(1 + O\left(\tfrac{1}{n}\right)\right)\\= {\sqrt{\frac{\frac{n+1}{2}}{e}}}\left(1+\frac1n\right)^{\frac{n}{2}}\left(1 + O\left(\tfrac{1}{n}\right)\right) \\= \sqrt{\frac{n}{2}} \left(1 + O\left(\tfrac{1}{n}\right)\right)\\ \to \sqrt{\frac{n}{2}}$$ and you may have a slight typo in your question
In fact when considering limits as $$n\to \infty$$, you should not have $$n$$ in the solution; instead you can say the ratio tends to $$1$$ and it turns out here that the difference tends to $$0$$. Another point is that $$\sqrt{\frac{n}{2}-\frac14}$$ is a better approximation, in that not only does the difference tend to $$0$$, but so too does the difference of the squares.
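For readers who want to see the convergence numerically, here is a short check (not part of the original answer; it assumes numpy and scipy are available and uses gammaln to avoid overflow for large $n$):

import numpy as np
from scipy.special import gammaln

for n in (10, 100, 1000, 10000):
    ratio = np.exp(gammaln((n + 1) / 2) - gammaln(n / 2))
    print(n, ratio, np.sqrt(n / 2), np.sqrt(n / 2 - 0.25))
# The ratio approaches sqrt(n/2), and sqrt(n/2 - 1/4) is closer still.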
A generalization uncovers a fundamental idea. One nice thing about it is how it circumvents calculation altogether: the Gamma functions don't play any role and, in fact, neither do the specific expressions for the Normal and Chi-squared pdfs.
Recall that the Student $$t$$ distribution with $$\nu$$ degrees of freedom originates (historically, pedagogically, and from a basic statistical standpoint) as the ratio
$$t_\nu = \frac{Z}{\sqrt{S_\nu^2/\nu}}$$
where $$Z$$ has a standard Normal distribution and $$S^2$$ is a random variable independent of $$Z$$ with a $$\chi^2(\nu)$$ distribution. (This characterization suffices to derive the probability density function proportional to
$$f_\nu(t) \propto \left(1 + \frac{t^2}{\nu}\right)^{-(\nu+1)/2}$$
for $$\nu \in \{1,2,3,\ldots\};$$ this is then generalized by allowing $$\nu$$ to be any positive real number. However, we will not need this detail; I present it only to make an explicit connection with how the question is framed.)
Generalization Part 1
Let $$Z$$ instead be a random variable with any distribution. Later I will want to work with its logarithm, so for this purpose use the indicator function $$\mathcal I$$ to split $$Z$$ into its negative, zero, and positive parts:
$$Z = -\mathcal{I}(Z\lt 0)(-Z) + \mathcal{I}(Z=0)Z + \mathcal{I}(Z\gt 0)Z = -Z_{-} + Z_0 + Z_{+}.$$
The fraction $$t_\nu$$ analogously splits into three parts by dividing each term by $$\sqrt{S_\nu^2/\nu}.$$ The part with numerator $$Z_0$$ is identically $$0$$ and the other parts are expressed as ratios with strictly positive random variables $$Z_{-}$$ and $$Z_{+}$$ in their numerators. These are the ratios we need to analyze.
Generalization Part 2
Let us suppose $$S_\nu^2$$ is a sequence of positive random variables that, for sufficiently large $$\nu,$$ have finite variances $$v^2_\nu$$ and (therefore) have finite means $$m_\nu$$ such that
$$\lim_{\nu\to\infty} \frac{m_\nu}{\nu}=1$$
and
$$\lim_{\nu\to\infty} \frac{v^2_\nu}{\nu^2} = 0.$$
(Both are well-known, easily-established properties of Chi-squared distributions.) This is just a specific way of stipulating that $$S_\nu^2$$ tends to get more and more concentrated (relative to its location) around the value $$\nu$$ as $$\nu$$ increases, but equivalently it shows that $$S_\nu^2/\nu$$ tends to $$1$$ while its variance tends to $$0.$$ Chebyshev's Inequality then implies an arbitrarily large amount of the probability of $$S_\nu^2/\nu$$ eventually becomes concentrated in arbitrarily small neighborhoods of $$1.$$ That in turn implies an arbitrarily large amount of the probability of $$\varphi_\nu=\log\left(S_\nu^2/\nu\right)$$ becomes concentrated in arbitrarily small neighborhoods of $$0.$$
In mathematical analysis, a sequence like $$(\varphi_\nu)$$ is sometimes called a "mollifier" (provided $$\varphi_\nu$$ is smooth and compactly supported). The key idea is that adding a mollifier to another random variable has less and less of an effect, converging (almost surely) to that other variable in the limit. That result does not depend on the smoothness of the mollifying functions and it only really requires that their supports constrict down to zero. However, since our $$\varphi_\nu$$ do not have compact support, the usual conclusion that convergence occurs almost everywhere (with respect to Lebesgue measure) has to be weakened to convergence in probability.
Analysis
Let $$W$$ represent either $$Z_{+}$$ or $$Z_{-}$$ and let $$T_\nu = S_\nu^2/\nu.$$ Because $$W$$ and $$T_\nu$$ are both positive, we may take logarithms:
$$\log\left(\frac{W}{\sqrt{T_\nu}}\right) = \log(W) + \left(- \frac{1}{2}\log(T_\nu)\right).$$
The factor of $$-1/2$$ does not affect the mollifying properties of the sequence of $$\varphi_\nu = \log(T_\nu).$$ Thus, the sequence $$\log(W/\sqrt{T_\nu})$$ converges in probability to $$\log(W).$$ Since the $$\log$$ is continuous, we see that $$W/\sqrt{T_\nu}$$ converges to $$W.$$
Obviously when $$W$$ is an atom at $$0,$$ the sequence $$W/\sqrt{T_\nu}$$ is constantly $$0.$$
Finally, now that we have seen that all three components of $$Z/\sqrt{T_\nu}$$ converge to the corresponding components of $$Z,$$ we conclude
In the generalized setting, $$t_\nu=\frac{Z}{\sqrt{S_\nu^2/\nu}}$$ converges in probability to $$Z.$$
If, in addition, $$Z$$ and $$S_\nu^2$$ (for each $$\nu,$$ at least eventually for large $$\nu$$) have continuous distributions with bounded densities (as in the case of Normal and Chi-squared distributions in the Student $$t$$ setting), it is now straightforward to show the sequence of distribution functions of $$t_\nu$$ converges uniformly to the distribution function of $$Z.$$ (The boundedness allows us to conclude that the convergence is uniform.)
While this is not as elementary as Stirling's approximation, the pointwise convergence of the density can be shown using dominated convergence theorem.
The density of a t-distribution with $$n$$ degrees of freedom is of the form $$f_n(x)=c_n\cdot\left(1+\frac{x^2}{n}\right)^{-(n+1)/2}\quad,\,x\in\mathbb R$$
Let $$g_n(x)=\left(1+\frac{x^2}{n}\right)^{-(n+1)/2}$$, so that $$g_n(x)\to e^{-x^2/2}$$ as $$n\to \infty$$.
So it just remains to show that $$c_n\to \frac1{\sqrt{2\pi}}$$ as $$n\to\infty$$.
Now, $$\left(1+\frac{x^2}{n}\right)^{(n+1)/2}\ge \left(1+(n+1)\frac{x^2}{n}+\frac{n+1}{2n}x^4\right)^{1/2}\ge \left(1+\frac{x^4}{2}\right)^{1/2}$$
This implies $$|g_n(x)|\le \left(1+\frac{x^4}{2}\right)^{-1/2}\,,$$
where $$\int_{-\infty}^\infty \left(1+\frac{x^4}{2}\right)^{-1/2}\,dx<\infty$$
So by dominated convergence theorem,
$$\lim_{n\to\infty}\int_{-\infty}^\infty g_n(x)\,dx=\int_{-\infty}^\infty \lim_{n\to\infty}g_n(x)\,dx=\int_{-\infty}^\infty e^{-x^2/2}\,dx=\sqrt{2\pi}$$
Finally, as $$\int_{-\infty}^\infty f_n(x)\,dx=c_n\int_{-\infty}^\infty g_n(x)\,dx=1$$, taking limit on both sides yields
$$\lim_{n\to\infty}c_n\cdot\sqrt{2\pi}=1$$
The nice thing about this approach is that we don't need to know what $$c_n$$ is to determine its limit.
Yet another way to derive this result is by using Slutsky's theorem, as shown here.
An easy, intuitive way is to recognize that the noncentral scaled t-distribution with n degrees of freedom is the posterior predictive of the normal model based on n data points. (I think this is essentially its origin, and gives a common sense interpretation of the t-test.) As n goes to infinity, the model becomes "perfect", and must converge to a normal distribution.
There are many ways you can establish that the T-distribution approaches the normal distribution in the limit. For the direct method you are using, asymptotic expansions for the ratio of gamma functions are analysed in detail in Tricomi and Erdélyi (1951). The simplest expansion comes through application of Stirling's inequality, to obtain the general result (p. 133):
$$\frac{\Gamma(z+\alpha)}{\Gamma(z+\beta)} = z^{\alpha-\beta} \Big[ 1 + \frac{(\alpha - \beta) (\alpha + \beta - 1)}{2z} + \mathcal{O}(|z|^{-2}) \Big].$$
Taking $$\beta = 0$$ gives the simplified asymptotic form:
$$\frac{\Gamma(z+\alpha)}{\Gamma(z)} = z^{\alpha} \Big[ 1 + \frac{\alpha (\alpha - 1)}{2z} + \mathcal{O}(|z|^{-2}) \Big].$$
To obtain a form for the function of interest we can take $$z = \tfrac{n}{2}$$ and $$\alpha = \tfrac{1}{2}$$ to obtain:
$$H(n) \equiv \frac{\Gamma(\tfrac{n+1}{2})}{\Gamma(\tfrac{n}{2})} = \sqrt{\frac{n}{2}} \Big[ 1 - \frac{1}{4n} + \mathcal{O}(n^{-2}) \Big].$$
We therefore have the desired limit:
$$\lim_{n \rightarrow \infty} \frac{\Gamma(\tfrac{n+1}{2})}{\sqrt{n \pi} \ \Gamma(\tfrac{n}{2})} = \lim_{n \rightarrow \infty} \frac{1}{\sqrt{2 \pi}} \Big[ 1 - \frac{1}{4n} + \mathcal{O}(n^{-2}) \Big] = \frac{1}{\sqrt{2 \pi}}.$$
https://techwhiff.com/learn/im-not-sure-about-my-selections-and-numbers-that/338806
Question: I'm not sure about my selections and the numbers that I plugged in, so please just ignore this.
https://web2.0calc.com/questions/homework-help-me-understand
# homework help me understand
Simplify the expression when x = 2 and y = 3:
7x + 3y - 48
I'm so confused.
Apr 29, 2022
#1
Substitute it like this: $$(7 \times 2) + (3 \times 3) - 48$$, which works out to $$14 + 9 - 48 = -25$$.
Apr 29, 2022
https://hal-insu.archives-ouvertes.fr/insu-02348185
# The interstellar object 'Oumuamua as a fractal dust aggregate
1 PoreLab [Oslo]
Department of Physics [Oslo], NTNU - Norwegian University of Science and Technology [Oslo]
Abstract : The first known interstellar object 'Oumuamua exhibited a nongravitational acceleration that appeared inconsistent with cometary outgassing, leaving radiation pressure as the most likely force. Bar the alien lightsail hypothesis, an ultra-low density due to a fractal structure might also explain the acceleration of 'Oumuamua by radiation pressure (Moro-Martin 2019). In this paper we report a decrease in 'Oumuamua's rotation period based on ground-based observations, and show that this spin-down can be explained by the YORP effect if 'Oumuamua is indeed a fractal body with the ultra-low density of $10^{-2}$ kg m$^{-3}$. We also investigate the mechanical consequences of 'Oumuamua as a fractal body subjected to rotational and tidal forces, and show that a fractal structure can survive these mechanical forces.
Document type :
Preprints, Working Papers, ...
Contributor : Renaud Toussaint
Submitted on : Tuesday, November 5, 2019 - 12:46:25 PM
Last modification on : Wednesday, November 3, 2021 - 6:49:26 AM
### Identifiers
• HAL Id : insu-02348185, version 1
• ARXIV : 1910.07135
### Citation
Eirik G. Flekkøy, Jane X. Luu, Renaud Toussaint. The interstellar object 'Oumuamua as a fractal dust aggregate. 2019. ⟨insu-02348185⟩
https://www.aimsciences.org/article/doi/10.3934/jimo.2016065
# American Institute of Mathematical Sciences
April 2017, 13(2): 1125-1147. doi: 10.3934/jimo.2016065
## Pricing and remanufacturing decisions for two substitutable products with a common retailer
1 School of Science, Tianjin Polytechnic University, Tianjin 300387, China 2 School of Management, Tianjin University of Technology, Tianjin 300384, China 3 Business School, Nankai University, Tianjin 300071, China
* Corresponding author: Jie Wei
Received June 2015 Published October 2016
Fund Project: The authors wish to express their sincerest thanks to the editors and anonymous referees for their constructive comments and suggestions on the paper. We gratefully acknowledge the support of (ⅰ) National Natural Science Foundation of China (NSFC), Research Fund Nos. 71301116, 71302112 for J. Zhao; (ⅱ) National Natural Science Foundation of China, Research Fund Nos. 71371186, 71202162 for J. Wei; (ⅲ) National Natural Science Foundation of China (NSFC), Research Fund No. 71372100, and the Major Program of the National Social Science Fund of China (Grant No. 13&ZD147) for Y. J. Li
This paper studies pricing and remanufacturing decisions for two substitutable products in a supply chain with two manufacturers and one common retailer. The two manufacturers produce two substitutable products and sell them to the retailer. Specifically, the first manufacturer is a traditional manufacturer who produces the new product directly from raw material, while the second manufacturer has incorporated a remanufacturing process for used product into his original production system, so that he can manufacture a new product directly from raw material, or remanufacture part or whole of a returned unit into a new product. We establish seven game models by considering the chain members' horizontal and vertical competitions, and obtain the corresponding closed-form expressions for equilibrium solution. Then, the equilibrium characteristics with respect to the second manufacturer's remanufacturing decision and all channel members' pricing decisions are explored, the sensitivity analysis of equilibrium solution is conducted for some model parameters, and the maximal profits and equilibrium solutions obtained in different game models are compared by numerical analyses. Based on these results, some interesting and valuable economic and managerial insights are established.
Citation: Jing Zhao, Jie Wei, Yongjian Li. Pricing and remanufacturing decisions for two substitutable products with a common retailer. Journal of Industrial & Management Optimization, 2017, 13 (2) : 1125-1147. doi: 10.3934/jimo.2016065
Figures (not reproduced here): changes of the optimal prices, optimal remanufacturing effort, and optimal profits with β, γ, a, B, and δ in the MSM model.
Chain members' maximum profits in different decision models

| Scenario | $\pi_{m1}+\pi_{m2}+\pi_{r}$ | $\pi_{m1}$ | $\pi_{m2}$ | $\pi_{r}$ |
| --- | --- | --- | --- | --- |
| MSB | 13549.7 | 2701.4 | 2713.6 | 8134.7 |
| MSM | 13361.2 | 2744.0 | 2976.9 | 7640.3 |
| MSR | 13357.5 | 2966.4 | 2756.9 | 7634.2 |
| RSB | 13553.5 | 1351.7 | 1353.8 | 10848.0 |
| RSM | 13066.1 | 1332.4 | 1672.7 | 10061.0 |
| RSR | 13047.6 | 1667.6 | 1343.0 | 10037.0 |
| NG | 13082.6 | 4103.7 | 4124.0 | 4854.9 |
Optimal prices and remanufacturing effort in different decision models

| Scenario | $p_1^*$ | $w_1^*$ | $p_2^*$ | $w_2^*$ | $\tau^*$ |
| --- | --- | --- | --- | --- | --- |
| MSB | 257.45 | 114.89 | 257.34 | 114.68 | 0.28575 |
| MSM | 264.20 | 128.40 | 259.58 | 119.17 | 0.29929 |
| MSR | 259.72 | 119.44 | 264.16 | 128.32 | 0.25393 |
| RSB | 257.39 | 67.54 | 257.18 | 66.97 | 0.28650 |
| RSM | 272.92 | 82.92 | 262.32 | 72.32 | 0.31775 |
| RSR | 262.72 | 72.72 | 273.16 | 83.16 | 0.21194 |
| NG | 151.35 | 102.70 | 151.08 | 102.16 | 0.49893 |
Notations used in the Problem Description
| Symbol | Description |
|---|---|
| $p_i$ | unit retail price of product $i$, $i=1,2$ |
| $w_i$ | unit wholesale price of product $i$ |
| $c_{mi}$ | unit manufacturing cost of product $i$, $i=1,2$ |
| $c_{r}$ | unit remanufacturing cost of product 2 |
| $\beta$ | self-price sensitivity of a product's demand to its own price |
| $\gamma$ | cross-price sensitivity of one product's demand to the other product's price |
| $D_i$ | the demand for product $i$, $i=1,2$ |
| $\tau$ | the manufacturer 2's remanufacturing effort |
| $B$ | scaling parameter of the manufacturer 2's recycling process |
|
2019-12-08 03:37:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46989569067955017, "perplexity": 11773.027167623652}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540504338.31/warc/CC-MAIN-20191208021121-20191208045121-00190.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-common-core/chapter-1-expressions-equations-and-inequalities-get-ready-page-1/15
|
## Algebra 2 Common Core
$-\frac{14}{3}$
Simplify the terms to $\frac{8}{3}-\frac{22}{3}$. Since both terms have the same denominator (the number on the bottom), subtract 22 from 8: $8-22 = -14$. Place the result over the common denominator to get $-\frac{14}{3}$.
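A quick sanity check of this arithmetic (not part of the textbook solution), using Python's built-in fractions module:

```python
from fractions import Fraction

# Verify 8/3 - 22/3 = -14/3
result = Fraction(8, 3) - Fraction(22, 3)
print(result)                      # -14/3
assert result == Fraction(-14, 3)
```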
|
2020-05-28 04:31:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9289259910583496, "perplexity": 896.6090161212842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347396495.25/warc/CC-MAIN-20200528030851-20200528060851-00476.warc.gz"}
|
https://chem.libretexts.org/Core/Organic_Chemistry/Chirality/Absolute_Configuration%2C_R-S_Sequence_Rules
|
# Absolute Configuration: R-S Sequence Rules
To name the enantiomers of a compound unambiguously, their names must include the "handedness" of the molecule. The method for this is formally known as R/S nomenclature.
### Introduction
The method of unambiguously assigning the handedness of molecules was originated by three chemists: R.S. Cahn, C. Ingold, and V. Prelog and, as such, is also often called the Cahn-Ingold-Prelog rules. In addition to the Cahn-Ingold system, there are two ways of experimentally determining the absolute configuration of an enantiomer:
1. X-ray diffraction analysis. Note that there is no correlation between the sign of rotation and the structure of a particular enantiomer.
2. Chemical correlation with a molecule whose structure has already been determined via X-ray diffraction.
However, for non-laboratory purposes, it is beneficial to focus on the R/S system. The sign of optical rotation, although different for the two enantiomers of a chiral molecule at the same temperature, cannot be used to establish the absolute configuration of an enantiomer, because the sign of optical rotation for a particular enantiomer may change when the temperature changes.
### Stereocenters are labeled R or S
The "right hand" and "left hand" nomenclature is used to name the enantiomers of a chiral compound. The stereocenters are labeled as R or S.
Consider the first picture: a curved arrow is drawn from the highest priority (1) substituent to the lowest priority (4) substituent. If the arrow points in a counterclockwise direction (to the left when leaving the 12 o'clock position), the configuration at the stereocenter is S ("sinister", Latin for "left"). If, however, the arrow points clockwise (to the right when leaving the 12 o'clock position), the stereocenter is labeled R ("rectus", Latin for "right"). The R or S is then added as a prefix, in parentheses, to the name of the enantiomer of interest.
Example 1
(R)-2-Bromobutane
(S)-2,3- Dihydroxypropanal
### Sequence rules to assign priorities to substituents
Before applying the R and S nomenclature to a stereocenter, the substituents must be prioritized according to the following rules:
#### Rule 1
First, examine the atoms directly attached to the stereocenter of the compound. A substituent with a higher atomic number takes precedence over a substituent with a lower atomic number. Hydrogen is the lowest possible priority substituent because it has the lowest atomic number.
1. When dealing with isotopes, the atom with the higher atomic mass receives higher priority.
2. When visualizing the molecule, the lowest priority substituent should always point away from the viewer (a dashed line indicates this). To picture how this works, imagine a clock and a pole. Attach the pole to the back of the clock so that, when looking at the face of the clock, the pole points away from the viewer, just as the lowest priority substituent should point away.
3. Then, draw an arrow from the highest priority atom to the 2nd highest priority atom to the 3rd highest priority atom. Because the 4th highest priority atom is placed in the back, the arrow should appear like it is going across the face of a clock. If it is going clockwise, then it is an R-enantiomer; If it is going counterclockwise, it is an S-enantiomer.
When looking at a problem with wedges and dashes, if the lowest priority atom is not on the dashed line pointing away, the molecule must be rotated.
Remember that
• Wedges indicate coming towards the viewer.
• Dashes indicate pointing away from the viewer.
#### Rule 2
If there are two substituents with equal rank, proceed along the two substituent chains until there is a point of difference. First, determine which of the chains has the first connection to an atom with the highest priority (the highest atomic number). That chain has the higher priority.
If the chains are similar, proceed down the chain, until a point of difference.
For example, an ethyl substituent takes priority over a methyl substituent. At the point of attachment to the stereocenter, both have a carbon atom, so they are equal in rank. Going down the chains, the methyl carbon has only hydrogen atoms attached to it, whereas the ethyl carbon has another carbon atom. The carbon atom on the ethyl is the first point of difference and has a higher atomic number than hydrogen; therefore the ethyl takes priority over the methyl.
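The "first point of difference" comparison in Rule 2 can be phrased as a small algorithm. The sketch below is an illustrative assumption on my part (it only looks one sphere out from the stereocenter and uses a plain lexicographic comparison), not the full CIP procedure, which explores whole substituent trees:

```python
# Atomic numbers for a few common elements (assumption: only these appear here)
Z = {"H": 1, "C": 6, "N": 7, "O": 8}

def attached_set(attached):
    """Atoms attached to the first atom of a substituent, as a tuple of
    atomic numbers sorted from highest to lowest (CIP-style comparison set)."""
    return tuple(sorted((Z[a] for a in attached), reverse=True))

def higher_priority(sub_a, sub_b):
    """Compare two substituents given as (first_atom, atoms_attached_to_it).
    Only the first sphere of difference is examined, as in Rule 2;
    deeper spheres are ignored in this toy version."""
    za, zb = Z[sub_a[0]], Z[sub_b[0]]
    if za != zb:
        return "a" if za > zb else "b"
    sa, sb = attached_set(sub_a[1]), attached_set(sub_b[1])
    if sa != sb:
        return "a" if sa > sb else "b"
    return "tie at this sphere"

# Ethyl (-CH2-CH3) vs methyl (-CH3): both start with C, but ethyl's carbon
# carries (C, H, H) while methyl's carries (H, H, H), so ethyl wins.
ethyl  = ("C", ["C", "H", "H"])
methyl = ("C", ["H", "H", "H"])
print(higher_priority(ethyl, methyl))   # -> 'a'
```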
#### Rule 3
If a chain is connected to the same kind of atom twice or three times, check to see if the atom it is connected to has a greater atomic number than any of the atoms that the competing chain is connected to.
• If none of the atoms connected to the competing chain(s) at the same point has a greater atomic number: the chain bonded to the same atom multiple times has the greater priority
• If however, one of the atoms connected to the competing chain has a higher atomic number: that chain has the higher priority.
Example 2
A 1-methylethyl substituent takes precedence over an ethyl substituent. Connected to the first carbon atom, ethyl only has one other carbon, whereas the 1-methylethyl has two carbon atoms attached to the first; this is the first point of difference. Therefore, 1-methylethyl ranks higher in priority than ethyl, as shown below:
However:
Remember that being double or triple bonded to an atom means that the atom is connected to the same atom twice. In such a case, follow the same method as above.
Caution!!
Keep in mind that priority is determined by the first point of difference along the two similar substituent chains. After the first point of difference, the rest of the chain is irrelevant.
When looking for the first point of difference on similar substituent chains, one may encounter branching. If there is branching, choose the branch that is higher in priority. If the two substituents have similar branches, rank the elements within the branches until a point of difference.
After all your substituents have been prioritized in the correct manner, you can now name/label the molecule R or S.
1. Put the lowest priority substituent in the back (dashed line).
2. Proceed from 1 to 2 to 3. (it is helpful to draw or imagine an arcing arrow that goes from 1--> 2-->3)
3. Determine whether the direction from 1 to 2 to 3 is clockwise or counterclockwise.
i) If it is clockwise, it is R.
ii) If it is counterclockwise, it is S.
USE YOUR MODELING KIT: Models assist in visualizing the structure. When using a model, make sure the lowest priority is pointing away from you. Then determine the direction from the highest priority substituent to the lowest: clockwise (R) or counterclockwise (S).
IF YOU DO NOT HAVE A MODELING KIT: remember that the dashes mean the bond is going into the screen and the wedges means that bond is coming out of the screen. If the lowest priority bond is not pointing to the back, mentally rotate it so that it is. However, it is very useful when learning organic chemistry to use models.
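A minimal sketch of this final labeling step, assuming the three highest-priority substituents have already been read off in clockwise order with the lowest-priority group pointing away from the viewer; the function name and input convention are mine, not part of the original rules:

```python
def assign_configuration(clockwise_priorities):
    """Given the priorities of the three highest-ranked substituents
    (1 = highest) read off in clockwise order while the lowest-priority
    substituent points away from the viewer, return 'R' or 'S'.
    A clockwise 1 -> 2 -> 3 sweep is R; a counterclockwise sweep is S."""
    order = tuple(clockwise_priorities)
    if order in {(1, 2, 3), (2, 3, 1), (3, 1, 2)}:
        return "R"
    if order in {(1, 3, 2), (3, 2, 1), (2, 1, 3)}:
        return "S"
    raise ValueError("expected a permutation of (1, 2, 3)")

print(assign_configuration([1, 2, 3]))  # R: 1 -> 2 -> 3 runs clockwise
print(assign_configuration([1, 3, 2]))  # S: 1 -> 2 -> 3 runs counterclockwise
```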
If you have a modeling kit use it to help you solve the following practice problems.
### Problems
Are the following R or S?
### Solutions
1. S: I > Br > F > H. The lowest priority substituent, H, is already pointing towards the back. It turns left going from I to Br to F, so it is S.
2. R: Br > Cl > CH3 > H. You have to switch the H and Br in order to place H, the lowest priority, in the back. Then, going from Br to Cl to CH3, you are turning right, giving you an R.
3. Neither R nor S: this molecule is achiral. Only chiral molecules can be named R or S.
4. R: OH > CN > CH2NH2 > H. The H, the lowest priority, has to be switched to the back. Then, going from OH to CN to CH2NH2, you are turning right, giving you an R.
5. S: $$\ce{-COOH}$$ > $$\ce{-CH_2OH}$$ > $$\ce{-C#CH}$$ > $$\ce{H}$$. Then, going from $$\ce{-COOH}$$ to $$\ce{-CH_2OH}$$ to $$\ce{-C#CH}$$ you are turning left, giving you an S configuration.
### References
1. Schore and Vollhardt. Organic Chemistry Structure and Function. New York:W.H. Freeman and Company, 2007.
2. McMurry, John and Simanek, Eric. Fundamentals of Organic Chemistry. 6th Ed. Brooks Cole, 2006.
### Contributors
• Ekta Patel (UCD), Ifemayowa Aworanti (University of Maryland Baltimore County)
|
2017-04-26 06:11:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45220091938972473, "perplexity": 1935.120851612874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121165.73/warc/CC-MAIN-20170423031201-00609-ip-10-145-167-34.ec2.internal.warc.gz"}
|
http://zbmath.org/?format=complete&q=an:1119.11001
|
# zbMATH — the first resource for mathematics
Number theory. Volume I: Tools and Diophantine equations. (English) Zbl 1119.11001
Graduate Texts in Mathematics 239. New York, NY: Springer (ISBN 978-0-387-49922-2/hbk). xxiii, 650 p. EUR 46.95/net; SFR 77.00; \$ 59.95; £ 36.00 (2007).
Although the interest in Diophantine problems has fueled a lot of research in algebraic number theory and arithmetic geometry, textbooks dedicated to general Diophantine analysis (as opposed to books on problems of a special type, such as elliptic curves or Fermat’s Last Theorem) are rare. The few books that come to mind are R. D. Carmichael’s Diophantine Analysis [New York: John Wiley and Sons. VI (1915; JFM 45.0283.11)], L. J. Mordell’s famous “Diophantine Equations” [Pure and Applied Mathematics, 30. London-New York: Academic Press (1969; Zbl 0188.34503)] and S. Lang’s Diophantine Geometry [Interscience Tracts in Pure and Applied Mathematics. 11. New York and London: Interscience Publishers, a division of John Wiley and Sons. (1962; Zbl 0115.38701)].
The book under review deals with Diophantine analysis from a number-theoretic point of view. Its author shares Mordell’s taste for concrete Diophantine problems, but wisely avoids following the latter’s concept: Mordell’s “classification” of Diophantine problems was already outdated when his book appeared. In fact, the appeal of a given problem is usually measured by the techniques created for solving it, and it is therefore only natural to give an exposition oriented towards the tools rather than the problems.
The first volume starts with a brief (historical) introduction to Diophantine equations, and then presents the basic tools of the trade, mostly with proofs. Chapter 2 introduces residue classes, quadratic reciprocity, lattices and LLL-reduction, finite fields, Gauss and Jacobi sums, and the Weil bounds. In Chapter 3, Cohen reviews algebraic number theory, with an emphasis on cyclotomic fields and Stickelberger’s theorem. Chapters 4 and 5 give the basic theory of $p$-adic number fields and their extensions, and the theory of quadratic forms from the viewpoint of Local-Global Principles.
The second part of volume I deals with Diophantine equations: in Chapter 6, problems of degree $\le 4$ as well as Fermat’s Last Theorem are discussed. Chapter 7 provides the relevant results from the theory of elliptic curves, and Chapter 8 discusses Diophantine aspects of elliptic curves, namely descent, $L$-series, Heegner points, and integral points via elliptic logarithms.
It should be clear from this brief description of the content that the author’s aim is not primarily the algorithmic aspect of the solution of Diophantine equations (this is discussed in detail in N. P. Smart’s book [The algorithmic resolution of Diophantine equations. London Mathematical Society Student Texts. 41. Cambridge: Cambridge University Press (1998; Zbl 0907.11001)]) but rather the mathematics that lies behind some of the most spectacular results of the last few years, in particular Fermat’s Last Theorem and Catalan’s equation. Each chapter ends with exercises, ranging from simple to quite challenging problems. The clarity of the exposition is the one we expect from the author of two highly successful books on computational number theory [Zbl 0786.11071; Zbl 0977.11056], and makes this volume a must-read for researchers in Diophantine analysis.
##### MSC:
11-01 Textbooks (number theory) 11-02 Research monographs (number theory) 11Rxx Algebraic number theory: global fields 11Sxx Algebraic number theory: local and $p$-adic fields 11Dxx Diophantine equations 11G05 Elliptic curves over global fields
|
2014-04-16 13:32:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.673327624797821, "perplexity": 2387.6742326336166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://www.shaalaa.com/textbook-solutions/c/ncert-solutions-mathematics-textbook-class-12-chapter-5-continuity-and-differentiability_32
|
# NCERT solutions for Class 12 Mathematics chapter 5 - Continuity and Differentiability
## Chapter 5: Continuity and Differentiability
#### Chapter 5: Continuity and Differentiability solutions [Pages 159 - 161]
Q 1 | Page 159
Prove that the function f (x) = 5x – 3 is continuous at x = 0, at x = – 3 and at x = 5.
Q 1.3 | Page 159
Examine the following functions for continuity.
f(x) = (x^2 - 25)/(x + 5), x != -5
Q 2 | Page 159
Examine the continuity of the function f (x) = 2x2 – 1 at x = 3.
Q 3.1 | Page 159
Examine the following functions for continuity.
f (x) = x – 5
Q 3.2 | Page 159
Examine the following functions for continuity
1/(x - 5), x != 5
Q 3.4 | Page 159
Examine the following functions for continuity
f(x) = | x – 5|
Q 4 | Page 159
Prove that the function f(x) = x^n is continuous at x = n, where n is a positive integer
Q 5 | Page 159
Is the function f defined by f(x)= {(x, if x<=1),(5, if x > 1):}
continuous at x = 0? At x = 1? At x = 2?
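One way to check a question like Q5 is to compare the one-sided limits with the function value at each point; the following sympy sketch (my own setup, not part of the NCERT text) does exactly that:

```python
import sympy as sp

x = sp.symbols('x')
# Q5: f(x) = x for x <= 1, and 5 for x > 1
f = sp.Piecewise((x, x <= 1), (5, x > 1))

for point in (0, 1, 2):
    left  = sp.limit(f, x, point, dir='-')
    right = sp.limit(f, x, point, dir='+')
    value = f.subs(x, point)
    # Continuous at `point` exactly when both one-sided limits equal f(point)
    print(point, left, right, value, left == right == value)
```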
Q 6 | Page 159
Find all points of discontinuity of f, where f is defined by
f(x) = {(2x + 3, if x <= 2),(2x - 3, if x > 2):}
Q 7 | Page 159
Find all points of discontinuity of f, where f is defined by f(x) = {(|x|+3, if x<= -3),(-2x, if -3 < x < 3),(6x + 2, if x >= 3):}
Q 8 | Page 159
Find all points of discontinuity of f, where f is defined by f(x) = {(|x|/x , if x != 0),(0, if x = 0):}
Q 9 | Page 159
Find all points of discontinuity of f, where f is defined by
f(x) = {(x/|x|, ","if x < 0),(-1, ","if x >= 0):}
Q 10 | Page 159
Find all points of discontinuity of f, where f is defined by
f(x) = {(x+1, "," if x >= 1),(x^2 + 1, ","if x < 1):}
Q 11 | Page 159
Find all points of discontinuity of f, where f is defined by f(x) = {(x^3 - 3, if x <= 2),(x^2 + 1, if x > 2):}
Q 12 | Page 159
Find all points of discontinuity of f, where f is defined by f(x) = {(x^10 - 1, ","if x <= 1),(x^2, ","if x > 1):}
Q 13 | Page 159
Is the function defined by
f(x) = {(x+5, if x <= 1),(x -5, if x > 1):} a continuous function?
Q 14 | Page 160
Discuss the continuity of the function f, where f is defined by
f(x) = {(3, ","if 0 <= x <= 1),(4, ","if 1 < x < 3),(5, ","if 3 <= x <= 10):}
Q 15 | Page 160
Discuss the continuity of the function f, where f is defined by
f(x) = {(2x , ","if x < 0),(0, "," if 0 <= x <= 1),(4x, "," if x > 1):}
Q 16 | Page 160
Discuss the continuity of the function f, where f is defined by
f(x) = {(-2,"," if x <= -1),(2x, "," if -1 < x <= 1),(2, "," if x > 1):}
Q 17 | Page 160
Find the relationship between a and b so that the function f defined by f(x) = {(ax + 1, if x <= 3),(bx + 3, if x > 3):} is continuous at x = 3.
Q 18 | Page 160
For what value of lambda is the function defined by
f(x) = {(lambda(x^2 - 2x), "," if x <= 0),(4x+ 1, "," if x > 0):} continuous at x = 0? What about continuity at x = 1?
Q 19 | Page 160
Show that the function defined by g(x) = x - [x] is discontinuous at all integral points. Here [x] denotes the greatest integer less than or equal to x.
Q 20 | Page 160
Is the function defined by f(x) = x^2 - sin x + 5 continuous at x = π?
Q 21 | Page 160
Discuss the continuity of the following functions.
(a) f (x) = sin x + cos x
(b) f (x) = sin x − cos x
(c) f (x) = sin x × cos x
Q 22 | Page 160
Discuss the continuity of the cosine, cosecant, secant and cotangent functions.
Q 23 | Page 160
Find the points of discontinuity of f, where
f(x) = {((sinx)/x, "," if x < 0),(x + 1, "," if x >= 0):}
Q 24 | Page 160
Determine if f defined by
f(x) = {(x^2 sin 1/x, "," if x != 0),(0, "," if x = 0):} is a continuous function?
Q 25 | Page 161
Examine the continuity of f, where f is defined by
f(x) = {(sin x - cos x, if x != 0),(-1, "," if x = 0):}
Q 26 | Page 161
Find the values of k so that the function f is continuous at the indicated point.
f(x) = {((kcosx)/(pi-2x), "," if x != pi/2),(3, "," if x = pi/2):} " at x =" pi/2
Q 27 | Page 161
Find the values of k so that the function f is continuous at the indicated point.
f(x) = {(kx^2, "," if x<= 2),(3, "," if x > 2):} " at x" = 2
Q 28 | Page 161
Find the values of k so that the function f is continuous at the indicated point.
f(x) = {(kx +1, if x<= pi),(cos x, if x > pi):} " at x " = pi
Q 29 | Page 161
Find the values of k so that the function f is continuous at the indicated point.
f(x) = {(kx + 1, "," if x <= 5),(3x - 5, "," if x > 5):} " at x " = 5
Q 30 | Page 161
Find the values of a and b such that the function defined by
f(x) = {(5, "," if x <= 2),(ax +b, "," if 2 < x < 10),(21, "," if x >= 10):}
is a continuous function.
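For Q30, the two continuity conditions reduce to the linear equations 2a + b = 5 (at x = 2) and 10a + b = 21 (at x = 10); a short sympy sketch (my own, for illustration) solves them:

```python
import sympy as sp

a, b = sp.symbols('a b')
# Continuity at x = 2 requires 2a + b = 5; at x = 10 it requires 10a + b = 21.
solution = sp.solve([sp.Eq(2*a + b, 5), sp.Eq(10*a + b, 21)], [a, b])
print(solution)   # {a: 2, b: 1}
```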
Q 31 | Page 161
Show that the function defined by f (x) = cos (x2) is a continuous function.
Q 32 | Page 161
Show that the function defined by f(x) = |cos x| is a continuous function.
Q 33 | Page 161
Examine whether sin |x| is a continuous function.
Q 34 | Page 161
Find all the points of discontinuity of f defined by f(x) = |x| - |x + 1|.
#### Chapter 5: Continuity and Differentiability solutions [Page 166]
Q 1 | Page 166
Differentiate the functions with respect to x.
sin (x2 + 5)
Q 2 | Page 166
Differentiate the functions with respect to x.
cos (sin x)
Q 3 | Page 166
Differentiate the functions with respect to x.
sin (ax + b)
Q 4 | Page 166
Differentiate the functions with respect to x.
sec(tan (sqrtx))
Q 5 | Page 166
Differentiate the functions with respect to x.
(sin (ax + b))/cos (cx + d)
Q 6 | Page 166
Differentiate the functions with respect to x
cos x^3. sin^2 (x^3)
Q 7 | Page 166
Differentiate the functions with respect to x
2sqrt(cot(x^2))
Q 8 | Page 166
Differentiate the functions with respect to x.
cos (sqrtx)
Q 10 | Page 166
Prove that the function given by f(x) = |x - 1|, x in R, is not differentiable at x = 1.
#### Chapter 5: Continuity and Differentiability solutions [Page 169]
Q 1 | Page 169
Find dy/dx
2x + 3y = sin x
Q 2 | Page 169
Find dy/dx
2x + 3y = sin y
Q 3 | Page 169
Find dy/dx
ax + by2 = cos y
Q 4 | Page 169
Find dy/dx
xy + y2 = tan x + y
Q 5 | Page 169
Find dx/dy
x2 + xy + y2 = 100
Q 6 | Page 169
Find dy/dx
x3 + x2y + xy2 + y3 = 81
Q 7 | Page 169
Find dy/dx
sin2 y + cos xy = Π
Q 8 | Page 169
Find dy/dx
sin2 x + cos2 y = 1
Q 9 | Page 169
Find dy/dx
y = sin^(-1)((2x)/(1+x^2))
Q 10 | Page 169
Find dy/dx
y = tan^(-1) ((3x -x^3)/(1 - 3x^2)), - 1/sqrt3 < x < 1/sqrt3
Q 11 | Page 169
Find dy/dx
y = cos^(-1) ((1-x^2)/(1+x^2)), 0 < x < 1
Q 12 | Page 169
Find dy/dx
y = sin^(-1) ((1-x^2)/(1+x^2)), 0 < x < 1
Q 13 | Page 169
Find dx/dy
y = cos^(-1) ((2x)/(1+x^2)), -1 < x < 1
Q 14 | Page 169
Find dy/dx
y = sin^(-1)(2xsqrt(1-x^2)), -1/sqrt2 < x < 1/sqrt2
Q 15 | Page 169
Find dy/dx
y = sec^(-1) (1/(2x^2 - 1)), 0 < x < 1/sqrt2
#### Chapter 5: Continuity and Differentiability solutions [Pages 147 - 174]
Q 1 | Page 174
Differentiate the following w.r.t. x:
e^x/sinx
Q 2 | Page 147
Differentiate the following w.r.t. x: e^(sin^(-1) x)
Q 3 | Page 174
Differentiate the following w.r.t. x: e^(x^3)
Q 4 | Page 174
Differentiate the following w.r.t. x
sin (tan–1 e–x)
Q 5 | Page 174
Differentiate the following w.r.t. x:
log(cos e^x)
Q 6 | Page 174
Differentiate the following w.r.t. x:
e^x + e^(x^2) + ... + e^(x^5)
Q 7 | Page 174
Differentiate the following w.r.t. x:
sqrt(e^(sqrtx)), x > 0
Q 8 | Page 174
Differentiate the following w.r.t. x: log (log x), x > 1
Q 9 | Page 174
Differentiate the following w.r.t. x
cos x/log x, x >0
Q 10 | Page 174
Differentiate the following w.r.t. x:
cos (log x + ex), x > 0
#### Chapter 5: Continuity and Differentiability solutions [Pages 178 - 179]
Q 1 | Page 178
Differentiate the function with respect to x
cos x . cos 2x . cos 3x
Q 2 | Page 178
Differentiate the function with respect to x.
sqrt(((x-1)(x-2))/((x-3)(x-4)(x-5)))
Q 3 | Page 178
Differentiate the function with respect to x.
(log x)^(cos x)
Q 4 | Page 178
Differentiate the function with respect to x.
x^x - 2^(sin x)
Q 5 | Page 178
Differentiate the function with respect to x.
(x + 3)^2 . (x + 4)^3 . (x + 5)^4
Q 6 | Page 178
Differentiate the function with respect to x.
(x + 1/x)^x + x^((1+1/x))
Q 7 | Page 178
Differentiate the function with respect to x.
(log x)^x + x^(log x)
Q 8 | Page 178
Differentiate the function with respect to x.
(sin x)^x + sin^(-1) sqrtx
Q 9 | Page 178
Differentiate the function with respect to x.
x^(sin x) + (sin x)^(cos x)
Q 10 | Page 178
Differentiate the function with respect to x.
x^(xcosx) + (x^2 + 1)/(x^2 -1)
Q 11 | Page 178
Differentiate the function with respect to x.
(x cos x)^x + (x sin x)^(1/x)
Q 12 | Page 178
Find dy/dx of function
x^y + y^x = 1
Q 13 | Page 178
Find dy/dx of Function y^x = x^y
Q 14 | Page 178
Find dy/dx of Function
(cos x)^y = (cos y)^x
Q 15 | Page 178
Find dy/dx of function
xy = e^(x – y)
Q 16 | Page 178
Find the derivative of the function given by f (x) = (1 + x) (1 + x2) (1 + x4) (1 + x8) and hence find f ′(1).
Q 17 | Page 178
Differentiate (x^2 – 5x + 8) (x^3 + 7x + 9) in the three ways mentioned below:
(i) by using product rule
(ii) by expanding the product to obtain a single polynomial.
(iii) by logarithmic differentiation.
Do they all give the same answer?
Q 18 | Page 179
If uv and w are functions of x, then show that
d/dx(u.v.w) = (du)/dx v.w+u. (dv)/dx.w + u.v. (dw)/dx
in two ways-first by repeated application of product rule, second by logarithmic differentiation.
#### Chapter 5: Continuity and Differentiability solutions [Page 181]
Q 1 | Page 181
If x and y are connected parametrically by the equation, without eliminating the parameter, find dy/dx
x = 2at^2, y = at^4
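For parametric questions like the one above, dy/dx = (dy/dt)/(dx/dt); a sympy sketch of this computation for x = 2at^2, y = at^4 (illustrative, not part of the NCERT text):

```python
import sympy as sp

t, a = sp.symbols('t a', positive=True)
x = 2*a*t**2
y = a*t**4

# dy/dx = (dy/dt) / (dx/dt) = 4*a*t**3 / (4*a*t) = t**2
dydx = sp.simplify(sp.diff(y, t) / sp.diff(x, t))
print(dydx)   # t**2
```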
Q 3 | Page 181
If x and y are connected parametrically by the equation, without eliminating the parameter, find dy/dx
x = sin t, y = cos 2t
Q 4 | Page 181
If x and y are connected parametrically by the equation, without eliminating the parameter, find dy/dx
x = 4t, y = 4/t
Q 5 | Page 181
If x and y are connected parametrically by the equation, without eliminating the parameter, find dy/dx
x = cos θ – cos 2θ, y = sin θ – sin 2θ
Q 5.6 | Page 181
If x and y are connected parametrically by the equation, without eliminating the parameter, find dy/dx
x = a cos θ, y = b cos θ
Q 6 | Page 181
If x and y are connected parametrically by the equation, without eliminating the parameter, find dy/dx
x = a (θ – sin θ), y = a (1 + cos θ)
Q 7 | Page 181
If x and y are connected parametrically by the equation, without eliminating the parameter, find dy/dx
x = (sin^3t)/sqrt(cos 2t), y = (cos^3t)/sqrt(cos 2t)
Q 8 | Page 181
If x and y are connected parametrically by the equation, without eliminating the parameter, find dy/dx
x = a(cos t + log tan t/2), y = a sin t
Q 9 | Page 181
If x and y are connected parametrically by the equation, without eliminating the parameter, find dy/dx
x = a sec θ, y = b tan θ
Q 10 | Page 181
If x and y are connected parametrically by the equation, without eliminating the parameter, find dy/dx
x = a (cos θ + θ sin θ), y = a (sin θ – θ cos θ)
Q 11 | Page 181
if x = sqrt(a^(sin^(-1)t)), y = sqrt(a^(cos^(-1)t)), show that dy/dx = - y/x
#### Chapter 5: Continuity and Differentiability solutions [Pages 183 - 184]
Q 1 | Page 183
Find the second order derivatives of the function.
x2 + 3x + 2
Q 3 | Page 183
Find the second order derivatives of the function.
x . cos x
Q 4 | Page 183
Find the second order derivatives of the function.
log x
Q 5 | Page 183
Find the second order derivatives of the function.
x3 log x
Q 6 | Page 183
Find the second order derivatives of the function.
e^x sin 5x
Q 7 | Page 183
Find the second order derivatives of the function.
e^(6x) cos 3x
Q 8 | Page 183
Find the second order derivatives of the function.
tan–1 x
Q 9 | Page 183
Find the second order derivatives of the function.
log (log x)
Q 10 | Page 183
Find the second order derivatives of the function.
sin (log x)
Q 11 | Page 183
If y = 5 cos x – 3 sin x, prove that (d^2y)/(dx^2) + y = 0
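Q11 can be verified mechanically by differentiating twice and adding y back; a sympy sketch (my own check, not the textbook's working):

```python
import sympy as sp

x = sp.symbols('x')
y = 5*sp.cos(x) - 3*sp.sin(x)
# y'' = -5*cos(x) + 3*sin(x), so y'' + y should simplify to 0
print(sp.simplify(sp.diff(y, x, 2) + y))   # 0
```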
Q 12 | Page 184
If y = cos–1 x, Find (d^2y)/dx^2 in terms of y alone.
Q 13 | Page 184
If y = 3 cos (log x) + 4 sin (log x), show that x^2 y_2 + x y_1 + y = 0
Q 14 | Page 184
If y = Ae^(mx) + Be^(nx), show that (d^2y)/dx^2 - (m + n) (dy)/dx + mny = 0
Q 15 | Page 184
If y = 500e^(7x) + 600e^(–7x), show that (d^2y)/(dx^2) = 49y
Q 16 | Page 184
If ey (x + 1) = 1, show that (d^2y)/(dx^2) =((dy)/(dx))^2
Q 17 | Page 184
If y = (tan^(–1) x)^2, show that (x^2 + 1)^2 y_2 + 2x (x^2 + 1) y_1 = 2
Q 2 | Page 183
Find the second order derivatives of the function. x^20
#### Chapter 5: Continuity and Differentiability solutions [Page 186]
Q 1 | Page 186
Verify Rolle’s theorem for the function f (x) = x2 + 2x – 8, x ∈ [– 4, 2].
Q 2.1 | Page 186
Examine if Rolle’s Theorem is applicable to any of the following functions. Can you say some thing about the converse of Rolle’s Theorem from these examples?
f (x) = [x] for x ∈ [5, 9]
Q 2.2 | Page 186
Examine if Rolle’s Theorem is applicable to any of the following functions. Can you say some thing about the converse of Rolle’s Theorem from these examples?
f (x) = [x] for x ∈ [– 2, 2]
Q 2.3 | Page 186
Examine if Rolle’s Theorem is applicable to any of the following functions. Can you say some thing about the converse of Rolle’s Theorem from these examples?
f (x) = x2 – 1 for x ∈ [1, 2]
Q 3 | Page 186
If f : [– 5, 5] → R is a differentiable function and if f ′(x) does not vanish anywhere, then prove that f (– 5) ≠ f (5).
Q 4 | Page 186
Verify Mean Value Theorem, if f (x) = x2 – 4x – 3 in the interval [a, b], where a = 1 and b = 4.
Q 5 | Page 186
Verify Mean Value Theorem, if f (x) = x3 – 5x2 – 3x in the interval [a, b], where a = 1 and b = 3. Find all c ∈ (1, 3) for which f ′(c) = 0.
Q 6 | Page 186
Examine the applicability of Mean Value Theorem for all three functions given in the above exercise 2.
#### Chapter 5: Continuity and Differentiability solutions [Pages 191 - 192]
Q 1 | Page 191
Differentiate w.r.t. x the function (3x^2 – 9x + 5)^9
Q 2 | Page 191
Differentiate w.r.t. x the function sin^3 x + cos^6 x
Q 3 | Page 191
Differentiate w.r.t. x the function (5x)^(3 cos 2x)
Q 4 | Page 191
Differentiate w.r.t. x the function sin^(–1)(xsqrtx ), 0 ≤ x ≤ 1
Q 5 | Page 191
Differentiate w.r.t. x the function (cos^(-1) x/2)/sqrt(2x+7), -2 < x < 2
Q 6 | Page 191
Differentiate w.r.t. x the function cot^(-1) [(sqrt(1+sinx) + sqrt(1-sinx))/(sqrt(1+sinx) - sqrt(1-sinx))], 0 < x < pi/2
Q 7 | Page 191
Differentiate w.r.t. x the function (log x)^(log x), x > 1
Q 8 | Page 191
Differentiate w.r.t. x the function cos (a cos x + b sin x), for some constant a and b.
Q 9 | Page 191
Differentiate w.r.t. x the function (sin x – cos x)^(sin x – cos x), pi/4 < x < (3pi)/4
Q 10 | Page 191
Differentiate w.r.t. x the function x^x + x^a + a^x + a^a, for some fixed a > 0 and x > 0
Q 11 | Page 191
Differentiate w.r.t. x the function x^(x^2 -3) + (x -3)^(x^2), for x > 3
Q 12 | Page 191
Find dy/dx ,if y = 12 (1 – cos t), x = 10 (t – sin t), -pi/2< t< pi/2
Q 13 | Page 191
Find dy/dx , if y = sin–1 x + sin–1 sqrt(1-x^2), 0 < x < 1
Q 14 | Page 191
if xsqrt(1+y) + ysqrt(1+x) = 0, for −1 < x < 1, prove that dy/dx = -1/(1+ x)^2
Q 15 | Page 191
If (x – a)2 + (y – b)2 = c2, for some c > 0, prove that
[1+ (dy/dx)^2]^(3/2)/((d^2y)/dx^2) is a constant independent of a and b.
Q 16 | Page 192
If cos y = x cos (a + y), with cos a ≠ ± 1, prove that dy/dx = cos^2(a+y)/(sin a)
Q 17 | Page 192
If x = a (cos t + t sin t) and y = a (sin t – t cos t), find (d^2y)/dx^2
Q 18 | Page 192
If f (x) = |x|3, show that f ″(x) exists for all real x and find it.
Q 19 | Page 192
Using mathematical induction prove that d/(dx) (x^n) = nx^(n -1) for all positive integers n.
Q 20 | Page 192
Using the fact that sin (A + B) = sin A cos B + cos A sin B and the differentiation, obtain the sum formula for cosines
Q 21 | Page 192
Does there exist a function which is continuous everywhere but not differentiable at exactly two points? Justify your answer.
Q 22 | Page 192
if y = |(f(x), g(x), h(x)),(l, m, n),(a, b, c)|, prove that dy/dx = |(f'(x), g'(x), h'(x)),(l, m, n),(a, b, c)|
Q 23 | Page 192
if y = e^(acos^(-1)x), -1 <= x <= 1 show that (1- x^2) (d^2y)/(dx^2) -x dy/dx - a^2y = 0
## NCERT solutions for Class 12 Mathematics chapter 5 - Continuity and Differentiability
NCERT solutions for Class 12 Maths chapter 5 (Continuity and Differentiability) include all questions with solutions and detailed explanations. This will clear students' doubts about any question and improve application skills while preparing for board exams. The detailed, step-by-step solutions will help you understand the concepts better and clear up any confusion. Shaalaa.com presents the CBSE Mathematics Textbook for Class 12 solutions in a manner that helps students grasp basic concepts better and faster.
Further, we at Shaalaa.com provide such solutions so that students can prepare for written exams. NCERT textbook solutions can be a core help for self-study and act as perfect self-help guidance for students.
Concepts covered in Class 12 Mathematics chapter 5 Continuity and Differentiability are Higher Order Derivative, Algebra of Continuous Functions, Derivative - Exponential and Log, Concept of Differentiability, Proof Derivative X^n Sin Cos Tan, Infinite Series, Continuous Function of Point, Mean Value Theorem, Second Order Derivative, Derivatives of Functions in Parametric Forms, Logarithmic Differentiation, Exponential and Logarithmic Functions, Derivatives of Implicit Functions, Derivatives of Inverse Trigonometric Functions, Derivatives of Composite Functions - Chain Rule, Concept of Continuity.
Using these NCERT Class 12 solutions for the Continuity and Differentiability exercises is an easy way for students to prepare for the exams, as the solutions are arranged chapter-wise and page-wise. The questions involved in NCERT Solutions are important questions that can be asked in the final exam. Most CBSE Class 12 students prefer NCERT Textbook Solutions to score more in exams.
Get the free view of chapter 5 Continuity and Differentiability Class 12 extra questions for Maths, and use Shaalaa.com to keep it handy for your exam preparation.
|
2019-10-23 21:54:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7075628638267517, "perplexity": 3851.4639510323755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987836295.98/warc/CC-MAIN-20191023201520-20191023225020-00270.warc.gz"}
|
https://www.physicsforums.com/threads/arnold-odes-definitions.469477/
|
# Arnold ODEs, definitions
Rasalhague
I'm reading Arnold: Ordinary Differential Equations, Chapter 1. In section 1.2, an integral curve was defined as the graph, in the extended phase space, $\mathbb{R} \times M$, of the motion $\phi : \mathbb{R} \rightarrow M$ of a phase point in M. In 2.2, an integral curve is defined as the graph of a solution, $\phi : I \rightarrow U$, to a differential equation $\dot{x} = \mathbf{v}(x)$, where I and U are open intervals.
Now the extended phase space is said to be "a strip $\mathbb{R} \times U$ in the direct product of the t-axis and the x-axis". Why is it not a rectangle $I \times U$? What if $I \neq \mathbb{R}$?
I see the Wikipedia article Dynamical systems, in defining a dynamical system in general, makes the domain of the evolution function a subset of what Arnold calls the "extended phase space", and suggests that I(x) is not necessarily equal to T (in the notation of this page). Is I(x) always equal to T = R for a real dynamical system, a.k.a. flow? And is that why Arnold's extended phase space has to be $\mathbb{R} \times U$ rather than IxU?
Is "the integral curve of a differential equation" (being the graph of a solution) not necessarily defined for all of the extended phase space of the equation, and therefore not an integral curve in the sense of Arnold Ch. 1, section 1.2?
Homework Helper
You have identified a crucial property in differential equations, namely: when is the solution defined for all "time"? I agree with your reading that Arnol'd has designated the words "phase space", or "one parameter group", for the case where the solution IS defined for all real numbers. Check out sections 3.5 and 3.6 of chapter 1, where he discusses when this may not happen. As I recall it holds when the manifold is compact, and maybe when the equation is linear?? I am not an expert, but it is usual for different authors to make their own conventions as to the use of language. Whatever they call it, it is important to know when the solution is defined for all t.
Rasalhague
Thanks for the pointer, mathwonk. In sections 3.4 and 3.5, he says that not every differential equation on the line has an associated one-parameter group (= phase flow). In 3.6, he says the reason there is no phase flow in the case of the example in 3.5 is that the t-advance mappings gt are not defined for all x; that is, I think, the domain is not the whole of R for all (any?) of them. And yes, he says in 3.6 that "every differentiable velocity field on a compact manifold is the phase velocity field of a one-parameter group of diffeomorphisms". The example he gave in 3.5 was nonlinear in x. 3.3 talks about linearity--but I'll hold off paraphrasing for now till I've got some of these definitions straight in my head.
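The standard example of this failure (ẋ = x², which I believe is the one Arnold uses in 3.5) has solution x(t) = x₀/(1 − x₀t), which escapes to infinity at the finite time t = 1/x₀, so the t-advance maps cannot be defined for all t. A tiny numerical illustration of that blow-up (my own sketch with plain Euler stepping, not from the book or this thread):

```python
# dx/dt = x**2 blows up in finite time: x(t) = x0 / (1 - x0*t), escape at t = 1/x0.
x0, dt = 1.0, 1e-4
x, t = x0, 0.0
while x < 1e6 and t < 2.0:      # stop once the solution has clearly escaped
    x += dt * x * x             # explicit Euler step for dx/dt = x**2
    t += dt
print(t)   # close to 1/x0 = 1.0, the finite blow-up time
```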
A couple of incidental ponderings on glancing ahead to section 3:
In section 3, "the phase flow associated with a differential equation" is the family (i.e. set?) of mappings {gt : t is in R} : M --> M, where M is a set called the phase space. In section 1, a phase flow was a tuple (M,{gt : t is in R}). I thought when I read section 1 that this definition seemed a bit superfluous, since M is already part of the definition of each gt. Maybe this is why the M has been dropped in section 3.
Wikipedia defines a "one-parameter group" as a continuous group homomorphism from R (with addition) to G, where G is the underlying set of another topological group. It says that a "one-parameter group", so defined, is not a group. But Arnold's one-parameter group maps a set (the phase space) to itself, and composition follows the same rules as addition of real numbers, which makes me think that Arnold's one-paramater group is indeed a group.
|
2022-08-10 01:37:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8734980821609497, "perplexity": 345.28593527834647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571097.39/warc/CC-MAIN-20220810010059-20220810040059-00781.warc.gz"}
|
http://zbmath.org/?q=an:1115.11002&format=complete
|
# zbMATH — the first resource for mathematics
Andrzej Schinzel selecta. Volume I: Diophantine problems and polynomials. Volume II: Elementary, analytic and geometric number theory. Edited by Henryk Iwaniec, Władysław Narkiewicz and Jerzy Urbanowicz. (English) Zbl 1115.11002
Heritage of European Mathematics. Zürich: European Mathematical Society Publishing House (EMS) (ISBN 978-3-03719-038-8/hbk). xiv, 1393 p. EUR 168.00 (2007).
Nobody is able to present seriously all the works contained in these two volumes except A. Schinzel himself.
A. Schinzel wrote his first paper at the age of 17: “Sur la décomposition des nombres naturels en somme de nombres triangulaires distincts”, Bull. Acad. Polon. Sci., 1954. Most of his first papers were much influenced by his supervisor W. Sierpiński. The central theme of Schinzel’s work is arithmetical and algebraic properties of polynomials in one or several variables, in particular questions of irreducibility and zeros of polynomials; this concerns about one third of his papers and he wrote two books on this subject, each containing a lot of old and new information.
The selection presented in these two volumes contains 100 papers chosen among more than 200 papers published by Schinzel, and also a list of unsolved problems and unproved conjectures proposed by Schinzel in the years 1956–2006.
This collection is organized into 13 sections, each theme being presented and commented on by an expert. I present this list below, and for each section I select one result of Schinzel. My main criterion is simplicity and elegance, and the second one is novelty and originality; many of these results were ahead of their time when they appeared. But I note that most of the deepest results of Schinzel are very long to state, as they contain long lists of explicit special cases, and it was impossible to reproduce them here.
A. “Diophantine equations and integral forms”, commented by R. Tijdeman – 10 papers.
Reference: “On the equation ${y}^{m}=P\left(x\right)$” [with R. Tijdeman], Acta Arith. (1976). – If a polynomial $P\left(x\right)$ with rational coefficients has at least two distinct zeros then the equation $\phantom{\rule{0.166667em}{0ex}}{y}^{m}=P\left(x\right)$, where $x$ and $y$ are integers, $|y|>1$, implies $\phantom{\rule{0.166667em}{0ex}}m (effective).
B. “Continued fractions and integral forms”, commented by E. Dubois – 3 papers.
Reference: “On some problems of the arithmetical theory of continued fractions”, Acta Arith. (1961). – For a given quadratic surd $\xi$ let $\text{lp}\phantom{\rule{0.166667em}{0ex}}\xi$ be the length of the shortest period of the continued fraction expansion of $\xi$, and let $f\left(x\right)$ be a polynomial with integer coefficients, degree $d$, and positive leading coefficient $a$, then, if $d$ is odd or if $d$ is even and $a$ is not a square,
$lim sup\text{lp}\phantom{\rule{0.166667em}{0ex}}\sqrt{f\left(n\right)}=\infty ·$
C. “Algebraic number theory”, commented by D. W. Boyd and D. J. Lewis – 10 papers.
Reference: “On the product on the conjugates outside of the unit circle of an algebraic number”, Acta Arith. (1973). – The main result implies: Let $K$ be a CM field (i.e., a number field which is either totally real or a totally complex quadratic extension of such a field) of degree $n$ and let $P\in K\left[X\right]$ be a polynomial of degree $d$ such that ${X}^{d}\overline{P}\left(1/X\right)\ne \text{constant}·P\left(X\right)$ then
$\prod _{i=1}^{n}\prod _{|{\alpha }_{i,j}|>1}|{\alpha }_{i,j}|\ge {\left(\frac{1+\sqrt{5}}{2}\right)}^{n/2},$
where the ${\alpha }_{i,j}$ are the roots of the conjugates ${P}^{\left(j\right)}$ of $P$, $j=1$, ..., $n$.
D. “Polynomials in one variable”, commented by M. Filaseta – 17 papers.
Reference: “Reducibility of lacunary polynomials II”, Acta Arith. (1970). – For any polynomial $f$ with integer coefficients there exist infinitely many irreducible polynomials $g$ with integer coefficients such that
$\parallel f-g\parallel \le 3,$
where $\parallel P\parallel$ denotes the sum of the squares of the coefficients of a polynomial $P$.
E. “Polynomials in several variables”, commented by U. Zannier – 10 papers.
Reference: “Reducibility of polynomials of the form $f\left(x\right)-g\left(y\right)$”, Colloq. Math (1967). – Let $f$ and $g$ be non-constant polynomials with rational coefficients and let the degree of $f$ be a prime $p$. Then $f\left(x\right)-g\left(y\right)$ is reducible over the complex field if and only if $g\left(y\right)=f\left(c\left(y\right)\right)$ and either $c$ has rational coefficients or
$f\left(x\right)-g\left(y\right)=A{\left(x+\alpha \right)}^{p}-Bd{\left(y\right)}^{p},$
where $d$ has rational coefficients and $A$, $B$ and $\alpha$ are rationals.
F. “Hilbert Irreducibility Theorem”, commented by U. Zannier – 3 papers.
Reference: “The least admissible value of the parameter in Hilbert Irreducibility Theorem”, [with U. Zannier], Acta Arith. (1995). – Let ${F}_{1}$, ..., ${F}_{h}$ be irreducible polynomials in $ℚ\left[t,x\right]$ such that $deg{F}_{i}\le D$ and the height of each ${F}_{i}$ is at most $H$, then there exists a rational number ${t}^{*}=u/v$ such that each ${F}_{i}\left({t}^{*},x\right)$ is irreducible over $ℚ$ and
$max\left\{|u|,|v|\right\}\le exp\left({10}^{10}{D}^{100\phantom{\rule{0.166667em}{0ex}}h{D}^{2}logD}\left(1+{log}^{2}H\right)\right)·$
G. “Arithmetic functions”, commented by K. Ford – 6 papers.
Reference: “Sur l’équation $\varphi \left(x\right)=m$”, Elemente der Math. (1956). – For any positive integer $n$, there exist infinitely many rational integers $m$ which are multiples of $n$ and such that the equation $\varphi \left(x\right)=m$ has no solution.
H. “Divisibility and congruences”, commented by H.W. Lenstra jun. – 11 papers.
Reference: “On the congruence ${a}^{x}\equiv b\phantom{\rule{4.44443pt}{0ex}}\left(mod\phantom{\rule{0.277778em}{0ex}}p\right)$”, Bull. Acad. Pol. Sci. (1960). – If $a$ and $b$ are rational integers, $a>0$ and $b\ne {a}^{k}$ ($k$ – rational integer), then there exist infinitely many prime numbers $p$ for which the congruence ${a}^{x}\equiv b\phantom{\rule{4.44443pt}{0ex}}\left(mod\phantom{\rule{0.277778em}{0ex}}p\right)$ has no solution in rational integers $x$.
I. “Primitive divisors”, commented by C. L. Stewart. – 6 papers.
Reference: “On primitive factors of Lehmer numbers II”, Acta Arith. (1963). – Let $L$ and $M$ be coprime rational integers, suppose that $K=L-4M$ is non-zero, let $\alpha$ and $\beta$ be the roots of the trinomial ${z}^{2}-{L}^{1/2}z+M$ and put
${P}_{n}=\left\{\begin{array}{cc}\left({\alpha }^{n}-{\beta }^{n}\right)/\left(\alpha -\beta \right),\hfill & \text{for}\phantom{\rule{4.pt}{0ex}}n\phantom{\rule{4.pt}{0ex}}\text{odd},\hfill \\ \left({\alpha }^{n}-{\beta }^{n}\right)/\left({\alpha }^{2}-{\beta }^{2}\right),\hfill & \text{for}\phantom{\rule{4.pt}{0ex}}n\phantom{\rule{4.pt}{0ex}}\text{even}·\hfill \end{array}\right\$
Let $e=3$, 4 or 6. If ${L}^{1/2}$ is rational, ${K}^{1/2}$ is an irrational integer of the field $ℚ\left(\xi \right)$, $K$ is divisble by the cube of the discriminant of the field, ${\kappa }_{e}={k}_{e}\left(M\right)$ is square-free [where for a positive rational integer $x$ the number ${k}_{e}\left(x\right)$ is equal to $x$ divided by the greatest $e$th power dividing it],
${\eta }_{e}=\left\{\begin{array}{cc}2,\hfill & \text{if}\phantom{\rule{4.pt}{0ex}}e=6,\phantom{\rule{4pt}{0ex}}M\equiv 3\phantom{\rule{10.0pt}{0ex}}\left(mod\phantom{\rule{0.277778em}{0ex}}4\right),\hfill \\ 1,\hfill & \text{otherwise},\hfill \end{array}\right\$
and $n/\left({\eta }_{e}{\kappa }_{e}\right)$ is an integer relatively prime to $e$, then for $n>{n}_{e}\left(L,M\right)$ (effectively computable), ${P}_{n}$ has at least $e$ primitive factors.
J. “Prime numbers”, commented by J. Kaczorowski. – 5 papers.
Reference: “On two theorems of Gelfond and some of their applications”, Acta Arith. (1967). – If $f\left(x\right)$ is any quadratic polynomial without a double root then
$\underset{x\to \infty }{lim inf}\frac{\text{P}\left(f\left(x\right)\right)}{loglogx}\ge \left\{\begin{array}{cc}4/7,\hfill & \text{if}\phantom{\rule{4.pt}{0ex}}f\phantom{\rule{4.pt}{0ex}}\text{is}\phantom{\rule{4.pt}{0ex}}\text{irreducible},\hfill \\ 2/7,\hfill & \text{if}\phantom{\rule{4.pt}{0ex}}f\phantom{\rule{4.pt}{0ex}}\text{is}\phantom{\rule{4.pt}{0ex}}\text{reducible},\hfill \end{array}\right\$
where $\text{P}\left(x\right)$ denotes the greatest prime factor of a non-zero rational integer.
K. “Analytic number theory”, commented by J. Kaczorowski. – 4 papers.
Reference: “On an analytic problem considered by Sierpiński and Ramanujan”, in New trends in Probability and Statistics, v. 2, Analytic and Probabilistic Methods in Number Theory (1992). – Let $r\left(n\right)$ be the number of representations of a positive integer $n$ as a sum of two squares, then
$\sum _{n\le x}{r}^{2}\left(n\right)=4\phantom{\rule{0.166667em}{0ex}}xlogx+cx+{\Omega }\left({x}^{3/8}\right)·$
[Note: Sierpiński had proved that $\phantom{\rule{0.166667em}{0ex}}{\sum }_{n\le x}{r}^{2}\left(n\right)=4\phantom{\rule{0.166667em}{0ex}}xlogx+cx+O\left({x}^{3/4}logx\right)$ in 1906, and this is the first “${\Omega }$” result on this problem.]
L. “Geometry of numbers”, commented by W. M. Schmidt. – 4 papers.
Reference: “A decomposition of integer vectors”, [with S. Chaładus] PLISKA Stud. Mat. Bulgarica (1991). – For a vector $𝐧=\left({n}_{1},\cdots ,{n}_{k}\right)$ put $h\left(𝐧\right)=max|{n}_{i}|$. Then for any non-zero vector $𝐧=\left({n}_{1},{n}_{2},{n}_{3}\right)$ of rational integers there exist independent vectors $𝐩$ and $𝐪$ in ${ℤ}^{3}$ such that $𝐧=u𝐩+v𝐪$, with $u$, $v\in ℤ$ and
$h\left(𝐩\right)·h\left(𝐪\right)<\sqrt{\frac{4}{3}h\left(𝐧\right)}·$
M. “Other papers”, commented by S. Kwapień. – 5 papers.
Reference: “An inequality for determinants with real entries”, Colloq. Math. (1978). – For every matrix $A={\left({a}_{ij}\right)}_{i,j\le n}$ with real entries we have the inequality
$|det\left(A\right)|\le \prod _{i=1}^{n}max\left\{\sum _{1\le j\le n,\phantom{\rule{4pt}{0ex}}{a}_{ij}>0}{a}_{ij},-\sum _{1\le j\le n,\phantom{\rule{4pt}{0ex}}{a}_{ij}<0}{a}_{ij}\right\}·$
Conjectures. – We end this list with a very famous conjecture of Schinzel (1958), “conjecture H”: If ${f}_{1}$, ..., ${f}_{k}$ are irreducible univariate polynomials with integer coefficients and positive leading coefficient such that the product ${f}_{1}\left(x\right)\cdots {f}_{k}\left(x\right)$ has no fixed divisor $>1$, then there exist infinitely many positive integers $x$ such that all the numbers ${f}_{i}\left(x\right)$, $1\le i\le k$, are primes.
I hope that this enumeration will show the reader the extraordinary variety of Schinzel’s works and give a sense of the incredible amount of information contained in these two volumes.
##### MSC:
11-03 Historical (number theory)
01A75 Collected or selected works
12-03 Historical (field theory)
# NAG Toolbox Chapter Introduction F07 — linear equations (LAPACK)
## Scope of the Chapter
This chapter provides functions for the solution of systems of simultaneous linear equations, and associated computations. It provides functions for
• matrix factorizations;
• solution of linear equations;
• estimating matrix condition numbers;
• computing error bounds for the solution of linear equations;
• matrix inversion;
• computing scaling factors to equilibrate a matrix.
Functions are provided for both real and complex data.
For a general introduction to the solution of systems of linear equations, you should turn first to the F04 Chapter Introduction. The decision trees, in Decision Trees in the F04 Chapter Introduction, direct you to the most appropriate functions in Chapters F04 or F07 for solving your particular problem. In particular, Chapters F04 and F07 contain Black Box (or driver) functions which enable some standard types of problem to be solved by a call to a single function. Where possible, functions in Chapter F04 call Chapter F07 functions to perform the necessary computational tasks.
There are two types of driver functions in this chapter: simple drivers which just return the solution to the linear equations; and expert drivers which also return condition and error estimates and, in many cases, also allow equilibration. The simple drivers for real matrices have names of the form f07_a and for complex matrices have names of the form f07_n. The expert drivers for real matrices have names of the form f07_b and for complex matrices have names of the form f07_p.
The functions in this chapter (Chapter F07) handle only dense and band matrices (not matrices with more specialised structures, or general sparse matrices).
The functions in this chapter have all been derived from the LAPACK project (see Anderson et al. (1999)). They have been designed to be efficient on a wide range of high-performance computers, without compromising efficiency on conventional serial machines.
## Background to the Problems
This section is only a brief introduction to the numerical solution of systems of linear equations. Consult a standard textbook, for example Golub and Van Loan (1996) for a more thorough discussion.
### Notation
We use the standard notation for a system of simultaneous linear equations:
$Ax=b$ (1)
where $A$ is the coefficient matrix, $b$ is the right-hand side, and $x$ is the solution. $A$ is assumed to be a square matrix of order $n$.
If there are several right-hand sides, we write
$AX=B$ (2)
where the columns of $B$ are the individual right-hand sides, and the columns of $X$ are the corresponding solutions.
We also use the following notation, both here and in the function documents:
• $\hat{x}$: a computed solution to $Ax=b$ (which usually differs from the exact solution $x$ because of round-off error)
• $r=b-A\hat{x}$: the residual corresponding to the computed solution $\hat{x}$
• $\|x\|_{\infty }=\max_{i}|x_{i}|$: the $\infty$-norm of the vector $x$
• $\|x\|_{1}=\sum_{j=1}^{n}|x_{j}|$: the $1$-norm of the vector $x$
• $\|A\|_{\infty }=\max_{i}\sum_{j}|a_{ij}|$: the $\infty$-norm of the matrix $A$
• $\|A\|_{1}=\max_{j}\sum_{i=1}^{n}|a_{ij}|$: the $1$-norm of the matrix $A$
• $|x|$: the vector with elements $|x_{i}|$
• $|A|$: the matrix with elements $|a_{ij}|$
Inequalities of the form $\left|A\right|\le \left|B\right|$ are interpreted component-wise, that is $\left|{a}_{ij}\right|\le \left|{b}_{ij}\right|$ for all $i,j$.
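As a quick illustration, the norms above can be evaluated directly in plain MATLAB; the small matrix and vector below are arbitrary examples, not inputs to any Chapter F07 function.
```
% Illustrative values of the norms defined above (plain MATLAB, not NAG calls).
A = [4 -2; 1 3];            % an arbitrary 2-by-2 matrix
x = [1; -5];                % an arbitrary vector
norm(x, Inf)                % infinity-norm of x: max(abs(x)) = 5
norm(x, 1)                  % 1-norm of x: sum(abs(x)) = 6
norm(A, Inf)                % infinity-norm of A: largest row sum of abs(A) = 6
norm(A, 1)                  % 1-norm of A: largest column sum of abs(A) = 5
abs(A)                      % the matrix |A| with elements |a_ij|
```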
### Matrix Factorizations
If $A$ is upper or lower triangular, $Ax=b$ can be solved by a straightforward process of backward or forward substitution.
Otherwise, the solution is obtained after first factorizing $A$, as follows.
General matrices (LU factorization with partial pivoting)
$A=PLU$
where $P$ is a permutation matrix, $L$ is lower-triangular with diagonal elements equal to $1$, and $U$ is upper-triangular; the permutation matrix $P$ (which represents row interchanges) is needed to ensure numerical stability.
Symmetric positive definite matrices (Cholesky factorization)
$A = U^{\mathrm{T}} U \quad \text{or} \quad A = L L^{\mathrm{T}}$
where $U$ is upper triangular and $L$ is lower triangular.
Symmetric positive semidefinite matrices (pivoted Cholesky factorization)
$A = P U^{\mathrm{T}} U P^{\mathrm{T}} \quad \text{or} \quad A = P L L^{\mathrm{T}} P^{\mathrm{T}}$
where $P$ is a permutation matrix, $U$ is upper triangular and $L$ is lower triangular. The permutation matrix $P$ (which represents row-and-column interchanges) is needed to ensure numerical stability and to reveal the numerical rank of $A$.
Symmetric indefinite matrices (Bunch–Kaufman factorization)
$A = P U D U^{\mathrm{T}} P^{\mathrm{T}} \quad \text{or} \quad A = P L D L^{\mathrm{T}} P^{\mathrm{T}}$
where $P$ is a permutation matrix, $U$ is upper triangular, $L$ is lower triangular, and $D$ is a block diagonal matrix with diagonal blocks of order $1$ or $2$; $U$ and $L$ have diagonal elements equal to $1$, and have $2$ by $2$ unit matrices on the diagonal corresponding to the $2$ by $2$ blocks of $D$. The permutation matrix $P$ (which represents symmetric row-and-column interchanges) and the $2$ by $2$ blocks in $D$ are needed to ensure numerical stability. If $A$ is in fact positive definite, no interchanges are needed and the factorization reduces to $A=UD{U}^{\mathrm{T}}$ or $A=LD{L}^{\mathrm{T}}$ with diagonal $D$, which is simply a variant form of the Cholesky factorization.
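For readers who want to see these factorizations concretely, the following plain MATLAB sketch uses the built-in lu and chol functions; it only illustrates the mathematics above and is not equivalent to the Chapter F07 factorization functions, which return the factors in their own storage formats.
```
% LU factorization with partial pivoting.  MATLAB's convention is P*A = L*U,
% i.e., A = P'*L*U, with P a permutation matrix and L unit lower triangular.
A = [2 1 1; 4 3 3; 8 7 9];
[L, U, P] = lu(A);
norm(P*A - L*U)            % of the order of machine precision

% Cholesky factorization of a symmetric positive definite matrix: S = R'*R.
S = [4 2; 2 3];
R = chol(S);               % R is upper triangular
norm(S - R'*R)
```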
### Solution of Systems of Equations
Given one of the above matrix factorizations, it is straightforward to compute a solution to $Ax=b$ by solving two subproblems, as shown below, first for $y$ and then for $x$. Each subproblem consists essentially of solving a triangular system of equations by forward or backward substitution; the permutation matrix $P$ and the block diagonal matrix $D$ introduce only a little extra complication:
General matrices ( LU factorization)
$L y = P^{\mathrm{T}} b, \quad U x = y$
Symmetric positive definite matrices (Cholesky factorization)
$U^{\mathrm{T}} y = b, \quad U x = y \quad \text{or} \quad L y = b, \quad L^{\mathrm{T}} x = y$
Symmetric indefinite matrices (Bunch–Kaufman factorization)
$P U D y = b, \quad U^{\mathrm{T}} P^{\mathrm{T}} x = y \quad \text{or} \quad P L D y = b, \quad L^{\mathrm{T}} P^{\mathrm{T}} x = y$
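The two-stage solve can be mimicked in plain MATLAB with the backslash operator applied to the triangular factors; this is only a sketch of the substitution steps that the Chapter F07 "solve" functions perform on the stored factorization. Note that MATLAB's lu returns $P$ with $PA = LU$, so the first substitution uses $Pb$ rather than $P^{\mathrm{T}}b$.
```
% Solve A*x = b in two triangular substitutions, given P*A = L*U (plain MATLAB).
A = [2 1 1; 4 3 3; 8 7 9];
b = [4; 10; 24];
[L, U, P] = lu(A);
y = L \ (P*b);      % forward substitution:  L*y = P*b
x = U \ y;          % backward substitution: U*x = y
norm(A*x - b)       % residual is of the order of machine precision
```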
### Sensitivity and Error Analysis
#### Normwise error bounds
Frequently, in practical problems the data $A$ and $b$ are not known exactly, and it is then important to understand how uncertainties or perturbations in the data can affect the solution.
If $x$ is the exact solution to $Ax=b$, and $x+\delta x$ is the exact solution to a perturbed problem $\left(A+\delta A\right)\left(x+\delta x\right)=\left(b+\delta b\right)$, then
$\frac{\|\delta x\|}{\|x\|} \le \kappa(A)\left(\frac{\|\delta A\|}{\|A\|} + \frac{\|\delta b\|}{\|b\|}\right) + \cdots \text{second-order terms}$
where $\kappa \left(A\right)$ is the condition number of $A$ defined by
$\kappa(A) = \|A\| \cdot \|A^{-1}\|$ (3)
In other words, relative errors in $A$ or $b$ may be amplified in $x$ by a factor $\kappa \left(A\right)$. Estimating condition numbers discusses how to compute or estimate $\kappa \left(A\right)$.
Similar considerations apply when we study the effects of rounding errors introduced by computation in finite precision. The effects of rounding errors can be shown to be equivalent to perturbations in the original data, such that $\frac{‖\delta A‖}{‖A‖}$ and $\frac{‖\delta b‖}{‖b‖}$ are usually at most $p\left(n\right)\epsilon$, where $\epsilon$ is the machine precision and $p\left(n\right)$ is an increasing function of $n$ which is seldom larger than $10n$ (although in theory it can be as large as ${2}^{n-1}$).
In other words, the computed solution $\stackrel{^}{x}$ is the exact solution of a linear system $\left(A+\delta A\right)\stackrel{^}{x}=b+\delta b$ which is close to the original system in a normwise sense.
#### Estimating condition numbers
The previous section has emphasized the usefulness of the quantity $\kappa \left(A\right)$ in understanding the sensitivity of the solution of $Ax=b$. To compute the value of $\kappa \left(A\right)$ from equation (3) is more expensive than solving $Ax=b$ in the first place. Hence it is standard practice to estimate $\kappa \left(A\right)$, in either the $1$-norm or the $\infty$-norm, by a method which only requires $\mathit{O}\left({n}^{2}\right)$ additional operations, assuming that a suitable factorization of $A$ is available.
The method used in this chapter is Higham's modification of Hager's method (see Higham (1988)). It yields an estimate which is never larger than the true value, but which seldom falls short by more than a factor of $3$ (although artificial examples can be constructed where it is much smaller). This is acceptable since it is the order of magnitude of $\kappa \left(A\right)$ which is important rather than its precise value.
Because $\kappa \left(A\right)$ is infinite if $A$ is singular, the functions in this chapter actually return the reciprocal of $\kappa \left(A\right)$.
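As a plain MATLAB illustration of the same idea, the built-in rcond returns a cheap LAPACK-style estimate of the reciprocal of the 1-norm condition number, which can be compared with the exact value for a small ill-conditioned matrix.
```
% Reciprocal condition number: cheap estimate vs exact value (plain MATLAB).
A = hilb(8);                             % a notoriously ill-conditioned matrix
est   = rcond(A);                        % estimate of 1/kappa_1(A)
exact = 1/(norm(A, 1)*norm(inv(A), 1));  % exact reciprocal 1-norm condition number
[est exact]                              % same order of magnitude
```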
#### Scaling and Equilibration
The condition of a matrix and hence the accuracy of the computed solution, may be improved by scaling; thus if ${D}_{1}$ and ${D}_{2}$ are diagonal matrices with positive diagonal elements, then
$B = D_1 A D_2$
is the scaled matrix. A general matrix is said to be equilibrated if it is scaled so that the lengths of its rows and columns have approximately equal magnitude. Similarly a general matrix is said to be row-equilibrated (column-equilibrated) if it is scaled so that the lengths of its rows (columns) have approximately equal magnitude. Note that row scaling can affect the choice of pivot when partial pivoting is used in the factorization of $A$.
A symmetric or Hermitian positive definite matrix is said to be equilibrated if the diagonal elements are all approximately equal to unity.
For further information on scaling and equilibration see Section 3.5.2 of Golub and Van Loan (1996), Section 7.2, 7.3 and 9.8 of Higham (1988) and Section 5 of Chapter 4 of Wilkinson (1965).
Functions are provided to return the scaling factors that equilibrate a matrix for general, general band, symmetric and Hermitian positive definite and symmetric and Hermitian positive definite band matrices.
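A minimal sketch of the idea in plain MATLAB follows; this is not one of the Chapter F07 equilibration functions, whose scaling strategies differ in detail.
```
% Row and column equilibration by diagonal scaling: B = D1*A*D2 (plain MATLAB sketch).
A  = [1e6 2; 3 4e-6];                   % a badly scaled matrix
D1 = diag(1 ./ max(abs(A), [], 2));     % scale each row by its largest entry
D2 = diag(1 ./ max(abs(D1*A), [], 1));  % then scale each column of the row-scaled matrix
B  = D1 * A * D2;                       % equilibrated matrix
[cond(A) cond(B)]                       % scaling usually improves the condition
```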
#### Componentwise error bounds
A disadvantage of normwise error bounds is that they do not reflect any special structure in the data $A$ and $b$ – that is, a pattern of elements which are known to be zero – and the bounds are dominated by the largest elements in the data.
Componentwise error bounds overcome these limitations. Instead of the normwise relative error, we can bound the relative error in each component of $A$ and $b$:
$\max_{i,j,k}\left\{\frac{|\delta a_{ij}|}{|a_{ij}|}, \frac{|\delta b_{k}|}{|b_{k}|}\right\} \le \omega$
where the component-wise backward error bound $\omega$ is given by
$\omega = \max_{i}\frac{|r_{i}|}{\left(|A| \cdot |\hat{x}| + |b|\right)_{i}}$
Functions are provided in this chapter which compute $\omega$, and also compute a forward error bound which is sometimes much sharper than the normwise bound given earlier:
$\frac{\|x - \hat{x}\|_{\infty}}{\|x\|_{\infty}} \le \frac{\left\| |A^{-1}| \cdot |r| \right\|_{\infty}}{\|x\|_{\infty}}$
Care is taken when computing this bound to allow for rounding errors in computing $r$. The norm ${‖\left|{A}^{-1}\right|.\left|r\right|‖}_{\infty }$ is estimated cheaply (without computing ${A}^{-1}$) by a modification of the method used to estimate $\kappa \left(A\right)$.
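The backward error $\omega$ is just the defining formula above, so it can be evaluated directly in plain MATLAB once a computed solution is available; this is only the formula, not a Chapter F07 call.
```
% Componentwise backward error omega for a computed solution xhat (plain MATLAB).
A = [3 1; 1 2];  b = [9; 8];
xhat  = A \ b;                                     % some computed solution
r     = b - A*xhat;                                % its residual
omega = max( abs(r) ./ (abs(A)*abs(xhat) + abs(b)) )
```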
#### Iterative refinement of the solution
If $\stackrel{^}{x}$ is an approximate computed solution to $Ax=b$, and $r$ is the corresponding residual, then a procedure for iterative refinement of $\stackrel{^}{x}$ can be defined as follows, starting with ${x}_{0}=\stackrel{^}{x}$:
• for $i=0,1,\dots \text{}$, until convergence
compute ${r}_{i}=b-A{x}_{i}$;
solve $A{d}_{i}={r}_{i}$;
compute ${x}_{i+1}={x}_{i}+{d}_{i}$.
In Chapter F04, functions are provided which perform this procedure using additional precision to compute $r$, and are thus able to reduce the forward error to the level of machine precision.
The functions in this chapter do not use additional precision to compute $r$, and cannot guarantee a small forward error, but can guarantee a small backward error (except in rare cases when $A$ is very ill-conditioned, or when $A$ and $x$ are sparse in such a way that $\left|A\right|.\left|x\right|$ has a zero or very small component). The iterations continue until the backward error has been reduced as much as possible; usually only one iteration is needed.
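A minimal sketch of the refinement loop in plain MATLAB, working entirely in one precision as the functions in this chapter do (the NAG functions apply the equivalent steps to the stored factorization):
```
% Iterative refinement using an existing LU factorization (plain MATLAB sketch).
A = [2 1 1; 4 3 3; 8 7 9];  b = [4; 10; 24];
[L, U, P] = lu(A);
x = U \ (L \ (P*b));                   % initial computed solution
for i = 1:5
    r = b - A*x;                       % residual in working precision
    d = U \ (L \ (P*r));               % correction: solve A*d = r using the factors
    x = x + d;
    if norm(d, Inf) <= eps*norm(x, Inf), break, end   % usually one pass suffices
end
```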
### Matrix Inversion
It is seldom necessary to compute an explicit inverse of a matrix. In particular, do not attempt to solve $Ax=b$ by first computing ${A}^{-1}$ and then forming the matrix-vector product $x={A}^{-1}b$; the procedure described in Solution of Systems of Equations is more efficient and more accurate.
However, functions are provided for the rare occasions when an inverse is needed, using one of the factorizations described in Matrix Factorizations.
### Packed Storage Formats
Functions which handle symmetric matrices are usually designed so that they use either the upper or lower triangle of the matrix; it is not necessary to store the whole matrix. If the upper or lower triangle is stored conventionally in the upper or lower triangle of a two-dimensional array, the remaining elements of the array can be used to store other useful data.
However, that is not always convenient, and if it is important to economize on storage, the upper or lower triangle can be stored in a one-dimensional array of length $n\left(n+1\right)/2$ or a two-dimensional array with $n\left(n+1\right)/2$ elements; in other words, the storage is almost halved.
The one-dimensional array storage format is referred to as packed storage; it is described in Packed storage. The two-dimensional array storage format is referred to as Rectangular Full Packed (RFP) format; it is described in Rectangular Full Packed (RFP) Storage. They may also be used for triangular matrices.
Functions designed for these packed storage formats perform the same number of arithmetic operations as functions which use conventional storage. Those using a packed one-dimensional array are usually less efficient, especially on high-performance computers, so there is then a trade-off between storage and efficiency. The RFP functions are as efficient as for conventional storage, although only a small subset of functions use this format.
### Band and Tridiagonal Matrices
A band matrix is one whose nonzero elements are confined to a relatively small number of subdiagonals or superdiagonals on either side of the main diagonal. A tridiagonal matrix is a special case of a band matrix with just one subdiagonal and one superdiagonal. Algorithms can take advantage of bandedness to reduce the amount of work and storage required. The storage scheme used for band matrices is described in Band storage.
The $LU$ factorization for general matrices, and the Cholesky factorization for symmetric and Hermitian positive definite matrices both preserve bandedness. Hence functions are provided which take advantage of the band structure when solving systems of linear equations.
The Cholesky factorization preserves bandedness in a very precise sense: the factor $U$ or $L$ has the same number of superdiagonals or subdiagonals as the original matrix. In the $LU$ factorization, the row-interchanges modify the band structure: if $A$ has ${k}_{l}$ subdiagonals and ${k}_{u}$ superdiagonals, then $L$ is not a band matrix but still has at most ${k}_{l}$ nonzero elements below the diagonal in each column; and $U$ has at most ${k}_{l}+{k}_{u}$ superdiagonals.
The Bunch–Kaufman factorization does not preserve bandedness, because of the need for symmetric row-and-column permutations; hence no functions are provided for symmetric indefinite band matrices.
The inverse of a band matrix does not in general have a band structure, so no functions are provided for computing inverses of band matrices.
### Block Partitioned Algorithms
Many of the functions in this chapter use what is termed a block partitioned algorithm. This means that at each major step of the algorithm a block of rows or columns is updated, and most of the computation is performed by matrix-matrix operations on these blocks. The matrix-matrix operations are performed by calls to the Level 3 BLAS which are the key to achieving high performance on many modern computers. See Golub and Van Loan (1996) or Anderson et al. (1999) for more about block partitioned algorithms.
The performance of a block partitioned algorithm varies to some extent with the block size – that is, the number of rows or columns per block. This is a machine-dependent argument, which is set to a suitable value when the library is implemented on each range of machines. You do not normally need to be aware of what value is being used. Different block sizes may be used for different functions. Values in the range $16$ to $64$ are typical.
On some machines there may be no advantage from using a block partitioned algorithm, and then the functions use an unblocked algorithm (effectively a block size of $1$), relying solely on calls to the Level 2 BLAS.
### Mixed Precision LAPACK Routines
Some LAPACK routines use mixed precision arithmetic in an effort to solve problems more efficiently on modern hardware. They work by converting a double precision problem into an equivalent single precision problem, solving it and then using iterative refinement in double precision to find a full precision solution to the original problem. The method may fail if the problem is too ill-conditioned to allow the initial single precision solution, in which case the functions fall back to solve the original problem entirely in double precision. The vast majority of problems are not so ill-conditioned, and in those cases the technique can lead to significant gains in speed without loss of accuracy. This is particularly true on machines where double precision arithmetic is significantly slower than single precision.
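A rough plain MATLAB sketch of the idea behind the mixed precision drivers (the actual functions, for example f07ac and f07fc, do this internally and fall back to a full double precision solve if the refinement fails to converge):
```
% Mixed precision sketch: factorize and solve in single, refine in double (plain MATLAB).
A = rand(500);  b = rand(500, 1);
[Ls, Us, Ps] = lu(single(A));                 % single precision factorization
x = double(Us \ (Ls \ (Ps*single(b))));       % initial solution, promoted to double
for i = 1:10
    r = b - A*x;                              % residual in double precision
    d = double(Us \ (Ls \ (Ps*single(r))));   % correction from the single precision factors
    x = x + d;
    if norm(d, Inf) <= eps*norm(x, Inf), break, end
end
```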
## Recommendations on Choice and Use of Available Functions
### Available Functions
Tables 1 to 8 in Tables of Driver and Computational Functions show the functions which are provided for performing different computations on different types of matrices. Tables 1 to 4 show functions for real matrices; Tables 5 to 8 show functions for complex matrices. Each entry in the table gives the NAG function name and the LAPACK double precision name.
Functions are provided for the following types of matrix:
• general
• general band
• general tridiagonal
• symmetric or Hermitian positive definite
• symmetric or Hermitian positive definite (packed storage)
• symmetric or Hermitian positive definite (RFP storage)
• symmetric or Hermitian positive definite band
• symmetric or Hermitian positive definite tridiagonal
• symmetric or Hermitian indefinite
• symmetric or Hermitian indefinite (packed storage)
• triangular
• triangular (packed storage)
• triangular (RFP storage)
• triangular band
For each of the above types of matrix (except where indicated), functions are provided to perform the following computations:
(a) (except for RFP matrices) solve a system of linear equations (driver functions);
(b) (except for RFP matrices) solve a system of linear equations with condition and error estimation (expert drivers);
(c) (except for triangular matrices) factorize the matrix (see Matrix Factorizations);
(d) solve a system of linear equations, using the factorization (see Solution of Systems of Equations);
(e) (except for RFP matrices) estimate the condition number of the matrix, using the factorization (see Estimating condition numbers); these functions also require the norm of the original matrix (except when the matrix is triangular), which may be computed by a function in
(f) (except for RFP matrices) refine the solution and compute forward and backward error bounds (see Componentwise error bounds and Iterative refinement of the solution); these functions require the original matrix and right-hand side, as well as the factorization returned from (a) and the solution returned from (b);
(g) (except for band and tridiagonal matrices) invert the matrix, using the factorization (see Matrix Inversion);
(h) (except for tridiagonal, symmetric indefinite, triangular and RFP matrices) compute scale factors to equilibrate the matrix (see Scaling and Equilibration).
Thus, to solve a particular problem, it is usually only necessary to call a single driver function, but alternatively two or more functions may be called in succession. This is illustrated in the example programs in the function documents.
### Matrix Storage Schemes
In this chapter the following different storage schemes are used for matrices:
• – conventional storage in a two-dimensional array;
• – packed storage for symmetric, Hermitian or triangular matrices;
• – rectangular full packed (RFP) storage for symmetric, Hermitian or triangular matrices;
• – band storage for band matrices.
In the examples below, $*$ indicates an array element which need not be set and is not referenced by the functions.
#### Conventional storage
The default scheme for storing matrices is the obvious one: a matrix $A$ is stored in a two-dimensional array a, with matrix element ${a}_{ij}$ stored in array element $\mathrm{a}\left(i,j\right)$.
If a matrix is triangular (upper or lower, as specified by the argument uplo), only the elements of the relevant triangle are stored; the remaining elements of the array need not be set. Such elements are indicated by * or $⌴$ in the examples below.
For example, when $n=4$:
uplo Triangular matrix $\mathbit{A}$ Storage in array a 'U' $\left(\begin{array}{llll}{a}_{11}& {a}_{12}& {a}_{13}& {a}_{14}\\ & {a}_{22}& {a}_{23}& {a}_{24}\\ & & {a}_{33}& {a}_{34}\\ & & & {a}_{44}\end{array}\right)$ $\begin{array}{cccc}{a}_{11}& {a}_{12}& {a}_{13}& {a}_{14}\\ \text{⌴}& {a}_{22}& {a}_{23}& {a}_{24}\\ \text{⌴}& \text{⌴}& {a}_{33}& {a}_{34}\\ \text{⌴}& \text{⌴}& \text{⌴}& {a}_{44}\end{array}$ 'L' $\left(\begin{array}{llll}{a}_{11}& & & \\ {a}_{21}& {a}_{22}& & \\ {a}_{31}& {a}_{32}& {a}_{33}& \\ {a}_{41}& {a}_{42}& {a}_{43}& {a}_{44}\end{array}\right)$ $\begin{array}{cccc}{a}_{11}& \text{⌴}& \text{⌴}& \text{⌴}\\ {a}_{21}& {a}_{22}& \text{⌴}& \text{⌴}\\ {a}_{31}& {a}_{32}& {a}_{33}& \text{⌴}\\ {a}_{41}& {a}_{42}& {a}_{43}& {a}_{44}\end{array}$
Functions which handle symmetric or Hermitian matrices allow for either the upper or lower triangle of the matrix (as specified by uplo) to be stored in the corresponding elements of the array; the remaining elements of the array need not be set.
For example, when $n=4$:
uplo Hermitian matrix $\mathbit{A}$ Storage in array a 'U' $\left(\begin{array}{llll}{a}_{11}& {a}_{12}& {a}_{13}& {a}_{14}\\ {\stackrel{-}{a}}_{12}& {a}_{22}& {a}_{23}& {a}_{24}\\ {\stackrel{-}{a}}_{13}& {\stackrel{-}{a}}_{23}& {a}_{33}& {a}_{34}\\ {\stackrel{-}{a}}_{14}& {\stackrel{-}{a}}_{24}& {\stackrel{-}{a}}_{34}& {a}_{44}\end{array}\right)$ $\begin{array}{cccc}{a}_{11}& {a}_{12}& {a}_{13}& {a}_{14}\\ \text{⌴}& {a}_{22}& {a}_{23}& {a}_{24}\\ \text{⌴}& \text{⌴}& {a}_{33}& {a}_{34}\\ \text{⌴}& \text{⌴}& \text{⌴}& {a}_{44}\end{array}$ 'L' $\left(\begin{array}{llll}{a}_{11}& {\stackrel{-}{a}}_{21}& {\stackrel{-}{a}}_{31}& {\stackrel{-}{a}}_{41}\\ {a}_{21}& {a}_{22}& {\stackrel{-}{a}}_{32}& {\stackrel{-}{a}}_{42}\\ {a}_{31}& {a}_{32}& {a}_{33}& {\stackrel{-}{a}}_{43}\\ {a}_{41}& {a}_{42}& {a}_{43}& {a}_{44}\end{array}\right)$ $\begin{array}{cccc}{a}_{11}& \text{⌴}& \text{⌴}& \text{⌴}\\ {a}_{21}& {a}_{22}& \text{⌴}& \text{⌴}\\ {a}_{31}& {a}_{32}& {a}_{33}& \text{⌴}\\ {a}_{41}& {a}_{42}& {a}_{43}& {a}_{44}\end{array}$
#### Packed storage
Symmetric, Hermitian or triangular matrices may be stored more compactly, if the relevant triangle (again as specified by uplo) is packed by columns in a one-dimensional array. In this chapter, as in Chapter F08, arrays which hold matrices in packed storage, have names ending in P. For a matrix of order $n$, the array must have at least $n\left(n+1\right)/2$ elements. So:
• if $\mathrm{uplo}=\text{'U'}$, ${a}_{ij}$ is stored in $\mathrm{ap}\left(i+j\left(j-1\right)/2\right)$ for $i\le j$;
• if $\mathrm{uplo}=\text{'L'}$, ${a}_{ij}$ is stored in $\mathrm{ap}\left(i+\left(2n-j\right)\left(j-1\right)/2\right)$ for $j\le i$.
For example:
Triangle of matrix $A$ Packed storage in array ap $\mathrm{uplo}=\text{'U'}$ $\left(\begin{array}{llll}{a}_{11}& {a}_{12}& {a}_{13}& {a}_{14}\\ & {a}_{22}& {a}_{23}& {a}_{24}\\ & & {a}_{33}& {a}_{34}\\ & & & {a}_{44}\end{array}\right)$ ${a}_{11}\underbrace{{a}_{12}{a}_{22}}\underbrace{{a}_{13}{a}_{23}{a}_{33}}\underbrace{{a}_{14}{a}_{24}{a}_{34}{a}_{44}}$ $\mathrm{uplo}=\text{'L'}$ $\left(\begin{array}{llll}{a}_{11}& & & \\ {a}_{21}& {a}_{22}& & \\ {a}_{31}& {a}_{32}& {a}_{33}& \\ {a}_{41}& {a}_{42}& {a}_{43}& {a}_{44}\end{array}\right)$ $\underbrace{{a}_{11}{a}_{21}{a}_{31}{a}_{41}}\underbrace{{a}_{22}{a}_{32}{a}_{42}}\underbrace{{a}_{33}{a}_{43}}{a}_{44}$
Note that for real symmetric matrices, packing the upper triangle by columns is equivalent to packing the lower triangle by rows; packing the lower triangle by columns is equivalent to packing the upper triangle by rows. (For complex Hermitian matrices, the only difference is that the off-diagonal elements are conjugated.)
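The indexing above can be checked with a short plain MATLAB sketch that packs the upper triangle of a symmetric matrix by columns; ap here is an ordinary MATLAB vector, not the argument of any particular function.
```
% Pack the upper triangle of a symmetric matrix by columns (uplo = 'U', plain MATLAB).
n  = 4;
A  = magic(n) + magic(n)';             % an arbitrary symmetric matrix
ap = zeros(n*(n+1)/2, 1);
for j = 1:n
    for i = 1:j
        ap(i + j*(j-1)/2) = A(i, j);   % a_ij is stored in ap(i + j(j-1)/2) for i <= j
    end
end
```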
#### Rectangular Full Packed (RFP) Storage
The rectangular full packed (RFP) storage format offers the same savings in storage as the packed storage format (described in Packed storage), but is likely to be much more efficient in general since the block structure of the matrix is maintained. This structure can be exploited using block partition algorithms (see Block Partitioned Algorithms) in a similar way to matrices that use conventional storage.
Figure 1
Figure 1 gives a graphical representation of the key idea of RFP for the particular case of a lower triangular matrix of even dimensions. In all cases the original triangular matrix of stored elements is separated into a trapezoidal part and a triangular part. The number of columns in these two parts is equal when the dimension of the matrix is even, $n=2k$, while the trapezoidal part has $k+1$ columns when $n=2k+1$. The smaller part is then transposed and fitted onto the trapezoidal part forming a rectangle. The rectangle has dimensions $2k+1$ and $q$, where $q=k$ when $n$ is even and $q=k+1$ when $n$ is odd.
For functions using RFP there is the option of storing the rectangle as described above ($\mathrm{transr}=\text{'N'}$) or its transpose ($\mathrm{transr}=\text{'T'}$, for real a) or its conjugate transpose ($\mathrm{transr}=\text{'C'}$, for complex a).
As an example, we first consider RFP for the case $n=2k$ with $k=3$.
If $\mathrm{transr}=\text{'N'}$, then ar holds a as follows:
• For $\mathrm{uplo}=\text{'U'}$ the upper trapezoid $\mathrm{ar}\left(1:6,1:3\right)$ consists of the last three columns of a upper. The lower triangle $\mathrm{ar}\left(5:7,1:3\right)$ consists of the transpose of the first three columns of a upper.
• For $\mathrm{uplo}=\text{'L'}$ the lower trapezoid $\mathrm{ar}\left(2:7,1:3\right)$ consists of the first three columns of a lower. The upper triangle $\mathrm{ar}\left(1:3,1:3\right)$ consists of the transpose of the last three columns of a lower.
If $\mathrm{transr}=\text{'T'}$, then ar in both uplo cases is just the transpose of ar as defined when $\mathrm{transr}=\text{'N'}$.
uplo Triangle of matrix $\mathbf{A}$ Rectangular Full Packed matrix $\mathbf{AR}$ $\mathrm{transr}=\text{'N'}$ $\mathrm{transr}=\text{'T'}$ 'U' $\left(\begin{array}{llllll}\mathbf{00}& \mathbf{01}& \mathbf{02}& 03& 04& 05\\ & \mathbf{11}& \mathbf{12}& 13& 14& 15\\ & & \mathbf{22}& 23& 24& 25\\ & & & 33& 34& 35\\ & & & & 44& 45\\ & & & & & 55\end{array}\right)$ $\begin{array}{ccc}03& 04& 05\\ 13& 14& 15\\ 23& 24& 25\\ 33& 34& 35\\ \mathbf{00}& 44& 45\\ \mathbf{01}& \mathbf{11}& 55\\ \mathbf{02}& \mathbf{12}& \mathbf{22}\end{array}$ $\begin{array}{ccccccc}03& 13& 23& 33& \mathbf{00}& \mathbf{01}& \mathbf{02}\\ 04& 14& 24& 34& 44& \mathbf{11}& \mathbf{12}\\ 05& 15& 25& 35& 45& 55& \mathbf{22}\end{array}$ 'L' $\left(\begin{array}{l}00\\ 10& 11\\ 20& 21& 22\\ 30& 31& 32& \mathbf{33}\\ 40& 41& 42& \mathbf{43}& \mathbf{44}\\ 50& 51& 52& \mathbf{53}& \mathbf{54}& \mathbf{55}\end{array}\right)$ $\begin{array}{ccc}\mathbf{33}& \mathbf{43}& \mathbf{53}\\ 00& \mathbf{44}& \mathbf{54}\\ 10& 11& \mathbf{55}\\ 20& 21& 22\\ 30& 31& 32\\ 40& 41& 42\\ 50& 51& 52\end{array}$ $\begin{array}{ccccccc}\mathbf{33}& 00& 10& 20& 30& 40& 50\\ \mathbf{43}& \mathbf{44}& 11& 21& 31& 41& 51\\ \mathbf{53}& \mathbf{54}& \mathbf{55}& 22& 32& 42& 52\end{array}$
Now we consider RFP for the case $n=2k+1$ and $k=2$.
If $\mathrm{transr}=\text{'N'}$, then ar holds a as follows:
• if $\mathrm{uplo}=\text{'U'}$ the upper trapezoid $\mathrm{ar}\left(1:5,1:3\right)$ consists of the last three columns of a upper. The lower triangle $\mathrm{ar}\left(4:5,1:2\right)$ consists of the transpose of the first two columns of a upper;
• if $\mathrm{uplo}=\text{'L'}$ the lower trapezoid $\mathrm{ar}\left(1:5,1:3\right)$ consists of the first three columns of a lower. The upper triangle $\mathrm{ar}\left(1:2,2:3\right)$ consists of the transpose of the last two columns of a lower.
If $\mathrm{transr}=\text{'T'}$, then ar in both uplo cases is just the transpose of ar as defined when $\mathrm{transr}=\text{'N'}$.
uplo Triangle of matrix $\mathbf{A}$ Rectangular Full Packed matrix $\mathbf{AR}$ $\mathrm{transr}=\text{'N'}$ $\mathrm{transr}=\text{'T'}$ 'U' $\left(\begin{array}{lllll}\mathbf{00}& \mathbf{01}& 02& 03& 04\\ & \mathbf{11}& 12& 13& 14\\ & & 22& 23& 24\\ & & & 33& 34\\ & & & & 44\end{array}\right)$ $\begin{array}{ccc}02& 03& 04\\ 12& 13& 14\\ 22& 23& 24\\ \mathbf{00}& 33& 34\\ \mathbf{01}& \mathbf{11}& 44\end{array}$ $\begin{array}{ccccc}02& 12& 22& \mathbf{00}& \mathbf{01}\\ 03& 13& 23& 33& \mathbf{11}\\ 04& 14& 24& 34& 44\end{array}$ 'L' $\left(\begin{array}{l}00\\ 10& 11\\ 20& 21& 22\\ 30& 31& 32& \mathbf{33}\\ 40& 41& 42& \mathbf{43}& \mathbf{44}\end{array}\right)$ $\begin{array}{ccc}00& \mathbf{33}& \mathbf{43}\\ 10& 11& \mathbf{44}\\ 20& 21& 22\\ 30& 31& 32\\ 40& 41& 42\end{array}$ $\begin{array}{ccccc}00& 10& 20& 30& 40\\ \mathbf{33}& 11& 21& 31& 41\\ \mathbf{43}& \mathbf{44}& 22& 32& 42\end{array}$
Explicitly, in the real matrix case, ar is a one-dimensional array of length $n\left(n+1\right)/2$ and contains the elements of a as follows:
for $\mathrm{uplo}=\text{'U'}$ and $\mathrm{transr}=\text{'N'}$,
${a}_{ij}$ is stored in $\mathrm{ar}\left(\left(2k+1\right)\left(\mathit{i}-1\right)+\mathit{j}+k+1\right)$, for $1\le \mathit{j}\le k$ and $1\le i\le \mathit{j}$, and
${a}_{ij}$ is stored in $\mathrm{ar}\left(\left(2k+1\right)\left(\mathit{j}-k-1\right)+i\right)$, for $k<j\le n$ and $1\le i\le j$;
for $\mathrm{uplo}=\text{'U'}$ and $\mathrm{transr}=\text{'T'}$,
${a}_{ij}$ is stored in $\mathrm{ar}\left(q\left(j+k\right)+i\right)$, for $1\le j\le k$ and $1\le i\le j$, and
${a}_{ij}$ is stored in $\mathrm{ar}\left(q\left(i-1\right)+\mathit{j}-k\right)$, for $k<j\le n$ and $1\le i\le j$;
for $\mathrm{uplo}=\text{'L'}$ and $\mathrm{transr}=\text{'N'}$,
${a}_{ij}$ is stored in $\mathrm{ar}\left(\left(2k+1\right)\left(j-1\right)+i+k-q+1\right)$, for $1\le j\le q$ and $j\le i\le n$, and
${a}_{ij}$ is stored in $\mathrm{ar}\left(\left(2k+1\right)\left(i-k-1\right)+j-q\right)$, for $q<j\le n$ and $j\le i\le n$;
for $\mathrm{uplo}=\text{'L'}$ and $\mathrm{transr}=\text{'T'}$,
${a}_{ij}$ is stored in $\mathrm{ar}\left(q\left(i+k-q\right)+j\right)$, for $1\le j\le q$ and $1\le i\le n$, and
${a}_{ij}$ is stored in $\mathrm{ar}\left(q\left(j-1-q\right)+i-k\right)$, for $q<j\le n$ and $1\le i\le n$.
In the case of complex matrices, the assumption is that the full matrix, if it existed, would be Hermitian. Thus, when $\mathrm{transr}=\text{'N'}$, the triangular portion of a that is, in the real case, transposed into the notional $\left(2k+1\right)$ by $q$ RFP matrix is also conjugated. When $\mathrm{transr}=\text{'C'}$ the notional $q$ by $\left(2k+1\right)$ RFP matrix is the conjugate transpose of the corresponding $\mathrm{transr}=\text{'N'}$ RFP matrix. Explicitly, for complex a, the array ar contains the elements (or conjugated elements) of a as follows:
for $\mathrm{uplo}=\text{'U'}$ and $\mathrm{transr}=\text{'N'}$,
${\stackrel{-}{a}}_{ij}$ is stored in $\mathrm{ar}\left(\left(2k+1\right)\left(\mathit{i}-1\right)+\mathit{j}+k+1\right)$, for $1\le \mathit{j}\le k$ and $1\le i\le \mathit{j}$, and
${a}_{ij}$ is stored in $\mathrm{ar}\left(\left(2k+1\right)\left(\mathit{j}-k-1\right)+i\right)$, for $k<j\le n$ and $1\le i\le j$;
for $\mathrm{uplo}=\text{'U'}$ and $\mathrm{transr}=\text{'C'}$,
${a}_{ij}$ is stored in $\mathrm{ar}\left(q\left(j+k\right)+i\right)$, for $1\le j\le k$ and $1\le i\le j$, and
${\stackrel{-}{a}}_{ij}$ is stored in $\mathrm{ar}\left(q\left(i-1\right)+\mathit{j}-k\right)$, for $k<j\le n$ and $1\le i\le j$;
for $\mathrm{uplo}=\text{'L'}$ and $\mathrm{transr}=\text{'N'}$,
${a}_{ij}$ is stored in $\mathrm{ar}\left(\left(2k+1\right)\left(j-1\right)+i+k-q+1\right)$, for $1\le j\le q$ and $j\le i\le n$, and
${\stackrel{-}{a}}_{ij}$ is stored in $\mathrm{ar}\left(\left(2k+1\right)\left(i-k-1\right)+j-q\right)$, for $q<j\le n$ and $j\le i\le n$;
for $\mathrm{uplo}=\text{'L'}$ and $\mathrm{transr}=\text{'C'}$,
${\stackrel{-}{a}}_{ij}$ is stored in $\mathrm{ar}\left(q\left(i+k-q\right)+j\right)$, for $1\le j\le q$ and $1\le i\le n$, and
${a}_{ij}$ is stored in $\mathrm{ar}\left(q\left(j-1-q\right)+i-k\right)$, for $q<j\le n$ and $1\le i\le n$.
#### Band storage
A band matrix with ${k}_{l}$ subdiagonals and ${k}_{u}$ superdiagonals may be stored compactly in a two-dimensional array with ${k}_{l}+{k}_{u}+1$ rows and $n$ columns. Columns of the matrix are stored in corresponding columns of the array, and diagonals of the matrix are stored in rows of the array. This storage scheme should be used in practice only if ${k}_{l}$, ${k}_{u}\ll n$, although the functions in Chapters F07 and F08 work correctly for all values of ${k}_{l}$ and ${k}_{u}$. In Chapters F07 and F08 arrays which hold matrices in band storage have names ending in $\mathrm{B}$.
To be precise, the matrix elements ${a}_{ij}$ are stored as follows:
• ${a}_{ij}$ is stored in $\mathrm{ab}\left({k}_{u}+1+i-j,j\right)$ for $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,j-{k}_{u}\right)\le i\le \mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(n,j+{k}_{l}\right)$.
For example, when $n=5$, ${k}_{l}=2$ and ${k}_{u}=1$:
Band matrix $\mathbf{A}$ Band storage in array ab $\left(\begin{array}{lllll}{a}_{11}& {a}_{12}& & & \\ {a}_{21}& {a}_{22}& {a}_{23}& & \\ {a}_{31}& {a}_{32}& {a}_{33}& {a}_{34}& \\ & {a}_{42}& {a}_{43}& {a}_{44}& {a}_{45}\\ & & {a}_{53}& {a}_{54}& {a}_{55}\end{array}\right)$ $\begin{array}{ccccc}& & & & \\ \text{*}& {a}_{12}& {a}_{23}& {a}_{34}& {a}_{45}\\ {a}_{11}& {a}_{22}& {a}_{33}& {a}_{44}& {a}_{55}\\ {a}_{21}& {a}_{32}& {a}_{43}& {a}_{54}& \text{*}\\ {a}_{31}& {a}_{42}& {a}_{53}& \text{*}& \text{*}\end{array}$
The elements marked $*$ in the upper left and lower right corners of the array ab need not be set, and are not referenced by the functions.
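A short plain MATLAB sketch of this scheme, filling the array ab column by column; the extra ${k}_{l}$ superdiagonals needed to accommodate fill-in during $LU$ factorization (see the note below) are not allowed for here.
```
% Band storage sketch (plain MATLAB): a_ij is stored in ab(ku+1+i-j, j).
n = 5;  kl = 2;  ku = 1;
A  = triu(tril(rand(n), ku), -kl);     % a matrix with kl subdiagonals and ku superdiagonals
ab = zeros(kl + ku + 1, n);
for j = 1:n
    for i = max(1, j-ku):min(n, j+kl)
        ab(ku + 1 + i - j, j) = A(i, j);   % columns of A map to columns of ab
    end
end
```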
Note: when a general band matrix is supplied for $LU$ factorization, space must be allowed to store an additional ${k}_{l}$ superdiagonals, generated by fill-in as a result of row interchanges. This means that the matrix is stored according to the above scheme, but with ${k}_{l}+{k}_{u}$ superdiagonals.
Triangular band matrices are stored in the same format, with either ${k}_{l}=0$ if upper triangular, or ${k}_{u}=0$ if lower triangular.
For symmetric or Hermitian band matrices with $k$ subdiagonals or superdiagonals, only the upper or lower triangle (as specified by uplo) need be stored:
• if $\mathrm{uplo}=\text{'U'}$, ${a}_{ij}$ is stored in $\mathrm{ab}\left(k+1+i-j,j\right)$ for $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,j-k\right)\le i\le j$;
• if $\mathrm{uplo}=\text{'L'}$, ${a}_{ij}$ is stored in $\mathrm{ab}\left(1+i-j,j\right)$ for $j\le i\le \mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(n,j+k\right)$.
For example, when $n=5$ and $k=2$:
uplo Hermitian band matrix $\mathbit{A}$ Band storage in array ab 'U' $\left(\begin{array}{lllll}{a}_{11}& {a}_{12}& {a}_{13}& & \\ {\stackrel{-}{a}}_{12}& {a}_{22}& {a}_{23}& {a}_{24}& \\ {\stackrel{-}{a}}_{13}& {\stackrel{-}{a}}_{23}& {a}_{33}& {a}_{34}& {a}_{35}\\ & {\stackrel{-}{a}}_{24}& {\stackrel{-}{a}}_{34}& {a}_{44}& {a}_{45}\\ & & {\stackrel{-}{a}}_{35}& {\stackrel{-}{a}}_{45}& {a}_{55}\end{array}\right)$ $\begin{array}{lllll}\text{*}& \text{*}& {a}_{13}& {a}_{24}& {a}_{35}\\ \text{*}& {a}_{12}& {a}_{23}& {a}_{34}& {a}_{45}\\ {a}_{11}& {a}_{22}& {a}_{33}& {a}_{44}& {a}_{55}\end{array}$ 'L' $\left(\begin{array}{lllll}{a}_{11}& {\stackrel{-}{a}}_{21}& {\stackrel{-}{a}}_{31}& & \\ {a}_{21}& {a}_{22}& {\stackrel{-}{a}}_{32}& {\stackrel{-}{a}}_{42}& \\ {a}_{31}& {a}_{32}& {a}_{33}& {\stackrel{-}{a}}_{43}& {\stackrel{-}{a}}_{53}\\ & {a}_{42}& {a}_{43}& {a}_{44}& {\stackrel{-}{a}}_{54}\\ & & {a}_{53}& {a}_{54}& {a}_{55}\end{array}\right)$ $\begin{array}{lllll}{a}_{11}& {a}_{22}& {a}_{33}& {a}_{44}& {a}_{55}\\ {a}_{21}& {a}_{32}& {a}_{43}& {a}_{54}& \text{*}\\ {a}_{31}& {a}_{42}& {a}_{53}& \text{*}& \text{*}\end{array}$
Note that different storage schemes for band matrices are used by some functions in Chapters F01, F02, F03 and F04.
#### Unit triangular matrices
Some functions in this chapter have an option to handle unit triangular matrices (that is, triangular matrices with diagonal elements equal to $1$). This option is specified by an argument diag. If $\mathrm{diag}=\text{'U'}$ (Unit triangular), the diagonal elements of the matrix need not be stored, and the corresponding array elements are not referenced by the functions. The storage scheme for the rest of the matrix (whether conventional, packed or band) remains unchanged.
#### Real diagonal elements of complex matrices
Complex Hermitian matrices have diagonal elements that are by definition purely real. In addition, complex triangular matrices which arise in Cholesky factorization are defined by the algorithm to have real diagonal elements.
If such matrices are supplied as input to functions in Chapters F07 and F08, the imaginary parts of the diagonal elements are not referenced, but are assumed to be zero. If such matrices are returned as output by the functions, the computed imaginary parts are explicitly set to zero.
### Parameter Conventions
#### Option arguments
Most functions in this chapter have one or more option arguments, of type string. The descriptions in Section 5 of the function documents refer only to upper-case values (for example $\mathrm{uplo}=\text{'U'}$ or $\text{'L'}$); however, in every case, the corresponding lower-case characters may be supplied (with the same meaning). Any other value is illegal.
A longer character string can be passed as the actual argument, making the calling program more readable, but only the first character is significant. For example:
```
[b, info] = f07ae('Transpose', a, ipiv, b);
```
#### Problem dimensions
It is permissible for the problem dimensions (for example, m in nag_lapack_dgetrf (f07ad), n or nrhs_p in nag_lapack_dgetrs (f07ae)) to be passed as zero, in which case the computation (or part of it) is skipped. Negative dimensions are regarded as an error.
### Tables of Driver and Computational Functions
#### Real matrices
| Operation | general | general band | general tridiagonal |
|---|---|---|---|
| driver | nag_lapack_dgesv (f07aa) | nag_lapack_dgbsv (f07ba) | nag_lapack_dgtsv (f07ca) |
| expert driver | nag_lapack_dgesvx (f07ab) | nag_lapack_dgbsvx (f07bb) | nag_lapack_dgtsvx (f07cb) |
| mixed precision driver | nag_lapack_dsgesv (f07ac) | | |
| factorize | nag_lapack_dgetrf (f07ad) | nag_lapack_dgbtrf (f07bd) | nag_lapack_dgttrf (f07cd) |
| solve | nag_lapack_dgetrs (f07ae) | nag_lapack_dgbtrs (f07be) | nag_lapack_dgttrs (f07ce) |
| scaling factors | nag_lapack_dgeequ (f07af) | nag_lapack_dgbequ (f07bf) | |
| condition number | nag_lapack_dgecon (f07ag) | nag_lapack_dgbcon (f07bg) | nag_lapack_dgtcon (f07cg) |
| error estimate | nag_lapack_dgerfs (f07ah) | nag_lapack_dgbrfs (f07bh) | nag_lapack_dgtrfs (f07ch) |
| invert | nag_lapack_dgetri (f07aj) | | |
Table 1
Functions for real general matrices
| Operation | symmetric positive definite | symmetric positive definite (packed storage) | symmetric positive definite (RFP storage) | symmetric positive definite band | symmetric positive definite tridiagonal | symmetric positive semidefinite |
|---|---|---|---|---|---|---|
| driver | nag_lapack_dposv (f07fa) | nag_lapack_dppsv (f07ga) | | nag_lapack_dpbsv (f07ha) | nag_lapack_dptsv (f07ja) | |
| expert driver | nag_lapack_dposvx (f07fb) | nag_lapack_dppsvx (f07gb) | | nag_lapack_dpbsvx (f07hb) | nag_lapack_dptsvx (f07jb) | |
| mixed precision | nag_lapack_dsposv (f07fc) | | | | | |
| factorize | nag_lapack_dpotrf (f07fd) | nag_lapack_dpptrf (f07gd) | nag_lapack_dpftrf (f07wd) | nag_lapack_dpbtrf (f07hd) | nag_lapack_dpttrf (f07jd) | nag_lapack_dpstrf (f07kd) |
| solve | nag_lapack_dpotrs (f07fe) | nag_lapack_dpptrs (f07ge) | nag_lapack_dpftrs (f07we) | nag_lapack_dpbtrs (f07he) | nag_lapack_dpttrs (f07je) | |
| scaling factors | nag_lapack_dpoequ (f07ff) | nag_lapack_dppequ (f07gf) | | nag_lapack_dpbequ (f07hf) | | |
| condition number | nag_lapack_dpocon (f07fg) | nag_lapack_dppcon (f07gg) | | nag_lapack_dpbcon (f07hg) | nag_lapack_dptcon (f07jg) | |
| error estimate | nag_lapack_dporfs (f07fh) | nag_lapack_dpprfs (f07gh) | | nag_lapack_dpbrfs (f07hh) | nag_lapack_dptrfs (f07jh) | |
| invert | nag_lapack_dpotri (f07fj) | nag_lapack_dpptri (f07gj) | nag_lapack_dpftri (f07wj) | | | |
Table 2
Functions for real symmetric positive definite and positive semidefinite matrices
| Operation | symmetric indefinite | symmetric indefinite (packed storage) |
|---|---|---|
| driver | nag_lapack_dsysv (f07ma) | nag_lapack_dspsv (f07pa) |
| expert driver | nag_lapack_dsysvx (f07mb) | nag_lapack_dspsvx (f07pb) |
| factorize | nag_lapack_dsytrf (f07md) | nag_lapack_dsptrf (f07pd) |
| solve | nag_lapack_dsytrs (f07me) | nag_lapack_dsptrs (f07pe) |
| condition number | nag_lapack_dsycon (f07mg) | nag_lapack_dspcon (f07pg) |
| error estimate | nag_lapack_dsyrfs (f07mh) | nag_lapack_dsprfs (f07ph) |
| invert | nag_lapack_dsytri (f07mj) | nag_lapack_dsptri (f07pj) |
Table 3
Functions for real symmetric indefinite matrices
| Operation | triangular | triangular (packed storage) | triangular (RFP storage) | triangular band |
|---|---|---|---|---|
| solve | nag_lapack_dtrtrs (f07te) | nag_lapack_dtptrs (f07ue) | | nag_lapack_dtbtrs (f07ve) |
| condition number | nag_lapack_dtrcon (f07tg) | nag_lapack_dtpcon (f07ug) | | nag_lapack_dtbcon (f07vg) |
| error estimate | nag_lapack_dtrrfs (f07th) | nag_lapack_dtprfs (f07uh) | | nag_lapack_dtbrfs (f07vh) |
| invert | nag_lapack_dtrtri (f07tj) | nag_lapack_dtptri (f07uj) | nag_lapack_dtftri (f07wk) | |
Table 4
Functions for real triangular matrices
#### Complex matrices
| Operation | general | general band | general tridiagonal |
|---|---|---|---|
| driver | nag_lapack_zgesv (f07an) | nag_lapack_zgbsv (f07bn) | nag_lapack_zgtsv (f07cn) |
| expert driver | nag_lapack_zgesvx (f07ap) | nag_lapack_zgbsvx (f07bp) | nag_lapack_zgtsvx (f07cp) |
| mixed precision driver | nag_lapack_zcgesv (f07aq) | | |
| factorize | nag_lapack_zgetrf (f07ar) | nag_lapack_zgbtrf (f07br) | nag_lapack_zgttrf (f07cr) |
| solve | nag_lapack_zgetrs (f07as) | nag_lapack_zgbtrs (f07bs) | nag_lapack_zgttrs (f07cs) |
| scaling factors | nag_lapack_zgeequ (f07at) | nag_lapack_zgbequ (f07bt) | |
| condition number | nag_lapack_zgecon (f07au) | nag_lapack_zgbcon (f07bu) | nag_lapack_zgtcon (f07cu) |
| error estimate | nag_lapack_zgerfs (f07av) | nag_lapack_zgbrfs (f07bv) | nag_lapack_zgtrfs (f07cv) |
| invert | nag_lapack_zgetri (f07aw) | | |
Table 5
Functions for complex general matrices
| Operation | Hermitian positive definite | Hermitian positive definite (packed storage) | Hermitian positive definite (RFP storage) | Hermitian positive definite band | Hermitian positive definite tridiagonal | Hermitian positive semidefinite |
|---|---|---|---|---|---|---|
| driver | nag_lapack_zposv (f07fn) | nag_lapack_zppsv (f07gn) | | nag_lapack_zpbsv (f07hn) | nag_lapack_zptsv (f07jn) | |
| expert driver | nag_lapack_zposvx (f07fp) | nag_lapack_zppsvx (f07gp) | | nag_lapack_zpbsvx (f07hp) | nag_lapack_zptsvx (f07jp) | |
| mixed precision driver | nag_lapack_zcposv (f07fq) | | | | | |
| factorize | nag_lapack_zpotrf (f07fr) | nag_lapack_zpptrf (f07gr) | nag_lapack_zpftrf (f07wr) | nag_lapack_zpbtrf (f07hr) | nag_lapack_zpttrf (f07jr) | nag_lapack_zpstrf (f07kr) |
| solve | nag_lapack_zpotrs (f07fs) | nag_lapack_zpptrs (f07gs) | nag_lapack_zpftrs (f07ws) | nag_lapack_zpbtrs (f07hs) | nag_lapack_zpttrs (f07js) | |
| scaling factors | nag_lapack_zpoequ (f07ft) | nag_lapack_zppequ (f07gt) | | | | |
| condition number | nag_lapack_zpocon (f07fu) | nag_lapack_zppcon (f07gu) | | nag_lapack_zpbcon (f07hu) | nag_lapack_zptcon (f07ju) | |
| error estimate | nag_lapack_zporfs (f07fv) | nag_lapack_zpprfs (f07gv) | | nag_lapack_zpbrfs (f07hv) | nag_lapack_zptrfs (f07jv) | |
| invert | nag_lapack_zpotri (f07fw) | nag_lapack_zpptri (f07gw) | nag_lapack_zpftri (f07ww) | | | |
Table 6
Functions for complex Hermitian positive definite and positive semidefinite matrices
| Operation | Hermitian indefinite | symmetric indefinite | Hermitian indefinite (packed storage) | symmetric indefinite (packed storage) |
|---|---|---|---|---|
| driver | nag_lapack_zhesv (f07mn) | nag_lapack_zsysv (f07nn) | nag_lapack_zhpsv (f07pn) | nag_lapack_zspsv (f07qn) |
| expert driver | nag_lapack_zhesvx (f07mp) | nag_lapack_zsysvx (f07np) | nag_lapack_zhpsvx (f07pp) | nag_lapack_zspsvx (f07qp) |
| factorize | nag_lapack_zhetrf (f07mr) | nag_lapack_zsytrf (f07nr) | nag_lapack_zhptrf (f07pr) | nag_lapack_zsptrf (f07qr) |
| solve | nag_lapack_zhetrs (f07ms) | nag_lapack_zsytrs (f07ns) | nag_lapack_zhptrs (f07ps) | nag_lapack_zsptrs (f07qs) |
| condition number | nag_lapack_zhecon (f07mu) | nag_lapack_zsycon (f07nu) | nag_lapack_zhpcon (f07pu) | nag_lapack_zspcon (f07qu) |
| error estimate | nag_lapack_zherfs (f07mv) | nag_lapack_zsyrfs (f07nv) | nag_lapack_zhprfs (f07pv) | nag_lapack_zsprfs (f07qv) |
| invert | nag_lapack_zhetri (f07mw) | nag_lapack_zsytri (f07nw) | nag_lapack_zhptri (f07pw) | nag_lapack_zsptri (f07qw) |
Table 7
Functions for complex Hermitian and symmetric indefinite matrices
| Operation | triangular | triangular (packed storage) | triangular (RFP storage) | triangular band |
|---|---|---|---|---|
| solve | nag_lapack_ztrtrs (f07ts) | nag_lapack_ztptrs (f07us) | | nag_lapack_ztbtrs (f07vs) |
| condition number | nag_lapack_ztrcon (f07tu) | nag_lapack_ztpcon (f07uu) | | nag_lapack_ztbcon (f07vu) |
| error estimate | nag_lapack_ztrrfs (f07tv) | nag_lapack_ztprfs (f07uv) | | nag_lapack_ztbrfs (f07vv) |
| invert | nag_lapack_ztrtri (f07tw) | nag_lapack_ztptri (f07uw) | nag_lapack_ztftri (f07wx) | |
Table 8
Functions for complex triangular matrices
## Functionality Index
Apply iterative refinement to the solution and compute error estimates,
after factorizing the matrix of coefficients,
complex band matrix nag_lapack_zgbrfs (f07bv)
complex Hermitian indefinite matrix nag_lapack_zherfs (f07mv)
complex Hermitian indefinite matrix, packed storage nag_lapack_zhprfs (f07pv)
complex Hermitian positive definite band matrix nag_lapack_zpbrfs (f07hv)
complex Hermitian positive definite matrix nag_lapack_zporfs (f07fv)
complex Hermitian positive definite matrix, packed storage nag_lapack_zpprfs (f07gv)
complex Hermitian positive definite tridiagonal matrix nag_lapack_zptrfs (f07jv)
complex matrix nag_lapack_zgerfs (f07av)
complex symmetric indefinite matrix nag_lapack_zsyrfs (f07nv)
complex symmetric indefinite matrix, packed storage nag_lapack_zsprfs (f07qv)
complex tridiagonal matrix nag_lapack_zgtrfs (f07cv)
real band matrix nag_lapack_dgbrfs (f07bh)
real matrix nag_lapack_dgerfs (f07ah)
real symmetric indefinite matrix nag_lapack_dsyrfs (f07mh)
real symmetric indefinite matrix, packed storage nag_lapack_dsprfs (f07ph)
real symmetric positive definite band matrix nag_lapack_dpbrfs (f07hh)
real symmetric positive definite matrix nag_lapack_dporfs (f07fh)
real symmetric positive definite matrix, packed storage nag_lapack_dpprfs (f07gh)
real symmetric positive definite tridiagonal matrix nag_lapack_dptrfs (f07jh)
real tridiagonal matrix nag_lapack_dgtrfs (f07ch)
Compute error estimates,
complex triangular band matrix nag_lapack_ztbrfs (f07vv)
complex triangular matrix nag_lapack_ztrrfs (f07tv)
complex triangular matrix, packed storage nag_lapack_ztprfs (f07uv)
real triangular band matrix nag_lapack_dtbrfs (f07vh)
real triangular matrix nag_lapack_dtrrfs (f07th)
real triangular matrix, packed storage nag_lapack_dtprfs (f07uh)
Compute row and column scalings,
complex band matrix nag_lapack_zgbequ (f07bt)
complex Hermitian positive definite band matrix nag_lapack_zpbequ (f07ht)
complex Hermitian positive definite matrix nag_lapack_zpoequ (f07ft)
complex Hermitian positive definite matrix, packed storage nag_lapack_zppequ (f07gt)
complex matrix nag_lapack_zgeequ (f07at)
real band matrix nag_lapack_dgbequ (f07bf)
real matrix nag_lapack_dgeequ (f07af)
real symmetric positive definite band matrix nag_lapack_dpbequ (f07hf)
real symmetric positive definite matrix nag_lapack_dpoequ (f07ff)
real symmetric positive definite matrix, packed storage nag_lapack_dppequ (f07gf)
Condition number estimation,
after factorizing the matrix of coefficients,
complex band matrix nag_lapack_zgbcon (f07bu)
complex Hermitian indefinite matrix nag_lapack_zhecon (f07mu)
complex Hermitian indefinite matrix, packed storage nag_lapack_zhpcon (f07pu)
complex Hermitian positive definite band matrix nag_lapack_zpbcon (f07hu)
complex Hermitian positive definite matrix nag_lapack_zpocon (f07fu)
complex Hermitian positive definite matrix, packed storage nag_lapack_zppcon (f07gu)
complex Hermitian positive definite tridiagonal matrix nag_lapack_zptcon (f07ju)
complex matrix nag_lapack_zgecon (f07au)
complex symmetric indefinite matrix nag_lapack_zsycon (f07nu)
complex symmetric indefinite matrix, packed storage nag_lapack_zspcon (f07qu)
complex tridiagonal matrix nag_lapack_zgtcon (f07cu)
real band matrix nag_lapack_dgbcon (f07bg)
real matrix nag_lapack_dgecon (f07ag)
real symmetric indefinite matrix nag_lapack_dsycon (f07mg)
real symmetric indefinite matrix, packed storage nag_lapack_dspcon (f07pg)
real symmetric positive definite band matrix nag_lapack_dpbcon (f07hg)
real symmetric positive definite matrix nag_lapack_dpocon (f07fg)
real symmetric positive definite matrix, packed storage nag_lapack_dppcon (f07gg)
real symmetric positive definite tridiagonal matrix nag_lapack_dptcon (f07jg)
real tridiagonal matrix nag_lapack_dgtcon (f07cg)
complex triangular band matrix nag_lapack_ztbcon (f07vu)
complex triangular matrix nag_lapack_ztrcon (f07tu)
complex triangular matrix, packed storage nag_lapack_ztpcon (f07uu)
real triangular band matrix nag_lapack_dtbcon (f07vg)
real triangular matrix nag_lapack_dtrcon (f07tg)
real triangular matrix, packed storage nag_lapack_dtpcon (f07ug)
$LDL^{\mathrm{T}}$ factorization,
complex Hermitian positive definite tridiagonal matrix nag_lapack_zpttrf (f07jr)
real symmetric positive definite tridiagonal matrix nag_lapack_dpttrf (f07jd)
$LL^{\mathrm{T}}$ or $U^{\mathrm{T}}U$ factorization,
complex Hermitian positive definite band matrix nag_lapack_zpbtrf (f07hr)
complex Hermitian positive definite matrix nag_lapack_zpotrf (f07fr)
complex Hermitian positive definite matrix, packed storage nag_lapack_zpptrf (f07gr)
complex Hermitian positive definite matrix, RFP storage nag_lapack_zpftrf (f07wr)
complex Hermitian positive semidefinite matrix nag_lapack_zpstrf (f07kr)
real symmetric positive definite band matrix nag_lapack_dpbtrf (f07hd)
real symmetric positive definite matrix nag_lapack_dpotrf (f07fd)
real symmetric positive definite matrix, packed storage nag_lapack_dpptrf (f07gd)
real symmetric positive definite matrix, RFP storage nag_lapack_dpftrf (f07wd)
real symmetric positive semidefinite matrix nag_lapack_dpstrf (f07kd)
LU factorization,
complex band matrix nag_lapack_zgbtrf (f07br)
complex matrix nag_lapack_zgetrf (f07ar)
complex tridiagonal matrix nag_lapack_zgttrf (f07cr)
real band matrix nag_lapack_dgbtrf (f07bd)
real tridiagonal matrix nag_lapack_dgttrf (f07cd)
Matrix inversion,
after factorizing the matrix of coefficients,
complex Hermitian indefinite matrix nag_lapack_zhetri (f07mw)
complex Hermitian indefinite matrix, packed storage nag_lapack_zhptri (f07pw)
complex Hermitian positive definite matrix nag_lapack_zpotri (f07fw)
complex Hermitian positive definite matrix, packed storage nag_lapack_zpptri (f07gw)
complex Hermitian positive definite matrix, RFP storage nag_lapack_zpftri (f07ww)
complex matrix nag_lapack_zgetri (f07aw)
complex symmetric indefinite matrix nag_lapack_zsytri (f07nw)
complex symmetric indefinite matrix, packed storage nag_lapack_zsptri (f07qw)
real matrix nag_lapack_dgetri (f07aj)
real symmetric indefinite matrix nag_lapack_dsytri (f07mj)
real symmetric indefinite matrix, packed storage nag_lapack_dsptri (f07pj)
real symmetric positive definite matrix nag_lapack_dpotri (f07fj)
real symmetric positive definite matrix, packed storage nag_lapack_dpptri (f07gj)
real symmetric positive definite matrix, RFP storage nag_lapack_dpftri (f07wj)
complex triangular matrix nag_lapack_ztrtri (f07tw)
complex triangular matrix, packed storage nag_lapack_ztptri (f07uw)
complex triangular matrix, RFP storage,
expert driver nag_lapack_ztftri (f07wx)
real triangular matrix nag_lapack_dtrtri (f07tj)
real triangular matrix, packed storage nag_lapack_dtptri (f07uj)
real triangular matrix, RFP storage,
expert driver nag_lapack_dtftri (f07wk)
PLDL^TP^T or PUDU^TP^T factorization,
complex Hermitian indefinite matrix nag_lapack_zhetrf (f07mr)
complex Hermitian indefinite matrix, packed storage nag_lapack_zhptrf (f07pr)
complex symmetric indefinite matrix nag_lapack_zsytrf (f07nr)
complex symmetric indefinite matrix, packed storage nag_lapack_zsptrf (f07qr)
real symmetric indefinite matrix nag_lapack_dsytrf (f07md)
real symmetric indefinite matrix, packed storage nag_lapack_dsptrf (f07pd)
Solution of simultaneous linear equations,
after factorizing the matrix of coefficients,
complex band matrix nag_lapack_zgbtrs (f07bs)
complex Hermitian indefinite matrix nag_lapack_zhetrs (f07ms)
complex Hermitian indefinite matrix, packed storage nag_lapack_zhptrs (f07ps)
complex Hermitian positive definite band matrix nag_lapack_zpbtrs (f07hs)
complex Hermitian positive definite matrix nag_lapack_zpotrs (f07fs)
complex Hermitian positive definite matrix, packed storage nag_lapack_zpptrs (f07gs)
complex Hermitian positive definite matrix, RFP storage nag_lapack_zpftrs (f07ws)
complex Hermitian positive definite tridiagonal matrix nag_lapack_zpttrs (f07js)
complex matrix nag_lapack_zgetrs (f07as)
complex symmetric indefinite matrix nag_lapack_zsytrs (f07ns)
complex symmetric indefinite matrix, packed storage nag_lapack_zsptrs (f07qs)
complex tridiagonal matrix nag_lapack_zgttrs (f07cs)
real band matrix nag_lapack_dgbtrs (f07be)
real matrix nag_lapack_dgetrs (f07ae)
real symmetric indefinite matrix nag_lapack_dsytrs (f07me)
real symmetric indefinite matrix, packed storage nag_lapack_dsptrs (f07pe)
real symmetric positive definite band matrix nag_lapack_dpbtrs (f07he)
real symmetric positive definite matrix nag_lapack_dpotrs (f07fe)
real symmetric positive definite matrix, packed storage nag_lapack_dpptrs (f07ge)
real symmetric positive definite matrix, RFP storage nag_lapack_dpftrs (f07we)
real symmetric positive definite tridiagonal matrix nag_lapack_dpttrs (f07je)
real tridiagonal matrix nag_lapack_dgttrs (f07ce)
expert drivers (with condition and error estimation):
complex band matrix nag_lapack_zgbsvx (f07bp)
complex Hermitian indefinite matrix nag_lapack_zhesvx (f07mp)
complex Hermitian indefinite matrix, packed storage nag_lapack_zhpsvx (f07pp)
complex Hermitian positive definite band matrix nag_lapack_zpbsvx (f07hp)
complex Hermitian positive definite matrix nag_lapack_zposvx (f07fp)
complex Hermitian positive definite matrix, packed storage nag_lapack_zppsvx (f07gp)
complex Hermitian positive definite tridiagonal matrix nag_lapack_zptsvx (f07jp)
complex matrix nag_lapack_zgesvx (f07ap)
complex symmetric indefinite matrix nag_lapack_zsysvx (f07np)
complex symmetric indefinite matrix, packed storage nag_lapack_zspsvx (f07qp)
complex tridiagonal matrix nag_lapack_zgtsvx (f07cp)
real band matrix nag_lapack_dgbsvx (f07bb)
real matrix nag_lapack_dgesvx (f07ab)
real symmetric indefinite matrix nag_lapack_dsysvx (f07mb)
real symmetric indefinite matrix, packed storage nag_lapack_dspsvx (f07pb)
real symmetric positive definite band matrix nag_lapack_dpbsvx (f07hb)
real symmetric positive definite matrix nag_lapack_dposvx (f07fb)
real symmetric positive definite matrix, packed storage nag_lapack_dppsvx (f07gb)
real symmetric positive definite tridiagonal matrix nag_lapack_dptsvx (f07jb)
real tridiagonal matrix nag_lapack_dgtsvx (f07cb)
simple drivers,
complex band matrix nag_lapack_zgbsv (f07bn)
complex Hermitian indefinite matrix nag_lapack_zhesv (f07mn)
complex Hermitian indefinite matrix, packed storage nag_lapack_zhpsv (f07pn)
complex Hermitian positive definite band matrix nag_lapack_zpbsv (f07hn)
complex Hermitian positive definite matrix nag_lapack_zposv (f07fn)
complex Hermitian positive definite matrix, packed storage nag_lapack_zppsv (f07gn)
complex Hermitian positive definite matrix, using mixed precision nag_lapack_zcposv (f07fq)
complex Hermitian positive definite tridiagonal matrix nag_lapack_zptsv (f07jn)
complex matrix nag_lapack_zgesv (f07an)
complex matrix, using mixed precision nag_lapack_zcgesv (f07aq)
complex symmetric indefinite matrix nag_lapack_zsysv (f07nn)
complex symmetric indefinite matrix, packed storage nag_lapack_zspsv (f07qn)
complex triangular band matrix nag_lapack_ztbtrs (f07vs)
complex triangular matrix nag_lapack_ztrtrs (f07ts)
complex triangular matrix, packed storage nag_lapack_ztptrs (f07us)
complex tridiagonal matrix nag_lapack_zgtsv (f07cn)
real band matrix nag_lapack_dgbsv (f07ba)
real matrix nag_lapack_dgesv (f07aa)
real matrix, using mixed precision nag_lapack_dsgesv (f07ac)
real symmetric indefinite matrix nag_lapack_dsysv (f07ma)
real symmetric indefinite matrix, packed storage nag_lapack_dspsv (f07pa)
real symmetric positive definite band matrix nag_lapack_dpbsv (f07ha)
real symmetric positive definite matrix nag_lapack_dposv (f07fa)
real symmetric positive definite matrix, packed storage nag_lapack_dppsv (f07ga)
real symmetric positive definite matrix, using mixed precision nag_lapack_dsposv (f07fc)
real symmetric positive definite tridiagonal matrix nag_lapack_dptsv (f07ja)
real triangular band matrix nag_lapack_dtbtrs (f07ve)
real triangular matrix nag_lapack_dtrtrs (f07te)
real triangular matrix, packed storage nag_lapack_dtptrs (f07ue)
real tridiagonal matrix nag_lapack_dgtsv (f07ca)
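The routines above are normally chained: factorize, then solve, then (optionally) estimate the condition number from the factors. As a rough sketch only, using the MATLAB toolbox naming of this index (the argument orders shown are assumptions, not taken from the routine documents; consult f07ad, f07ae and f07ag for the exact interfaces):
a = [3, -1; 1, 2];
b = [2; 8];
anorm = norm(a, 1);                                 % 1-norm of A, needed by the condition estimator
[lu, ipiv, info] = nag_lapack_dgetrf(a);            % LU factorization (f07ad)
[x, info] = nag_lapack_dgetrs('N', lu, ipiv, b);    % solve A*x = b from the factors (f07ae)
[rcond, info] = nag_lapack_dgecon('1', lu, anorm);  % reciprocal condition number estimate (f07ag)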
# Including compiled PDFs in documentation built by l3build
I am building a project with l3build. There are some examples I want to present in the documentation, showing their source code, annotated with DocStrip, as well as including the output they produce using \includegraphics to pull in the PDF generated from the extracted example source code. It all looks roughly like this:
• examples.dtx:
% \begin{macrocode}
\documentclass[a8paper]{scrartcl}
\begin{document}
%<*italic>
% \end{macrocode}
% Here is some italic text.
% \begin{macrocode}
\textit{Lorem ipsum}
% \end{macrocode}
% \begin{macrocode}
%</italic>
%<*bold>
% \end{macrocode}
% And here is some bold text.
% \begin{macrocode}
\textbf{Lorem ipsum}
%</bold>
% \end{macrocode}
% That is all.
% \begin{macrocode}
\end{document}
% \end{macrocode}
• examples.ins:
\input docstrip.tex
\generate{\file{italic.tex}{\from{examples.dtx}{italic}}}
\generate{\file{bold.tex}{\from{examples.dtx}{bold}}}
\endbatchfile
• documentation.drv:
\documentclass{ltxdoc}
\usepackage{graphicx}
\begin{document}
Let me present some examples.
\DocInput{examples.dtx}
This is how the examples look when compiled. \\
\includegraphics{italic.pdf} \\
\includegraphics{bold.pdf}
\end{document}
• build.lua:
#!/usr/bin/env texlua
-- build.lua - not working
module = "example"
typesetfiles = {"documentation.drv"}
kpse.set_program_name("kpsewhich")
dofile(kpse.lookup("l3build.lua"))
• shell commands to do what build.lua is intended to do (typeset.sh):
tex *.ins
for doc in *.tex
do
pdflatex "\$doc"
done
pdflatex *.drv
I placed these files on GitHub for you to quickly download and test them.
I even thought of rather drastic measures like redefining the typeset function in terms of arara. But I cannot seem to configure l3build to compile the documentation correctly. The examples do not need to be distributed in any TDS or *.zip, but they have to be unpacked and compiled before typesetting the documentation. Can this be done with l3build?
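A minimal sketch of one possible direction, assuming (an assumption to verify against the l3build manual for the release in use) that files listed in typesetdemofiles are unpacked and typeset before the main documentation run, so that their PDFs are available to \includegraphics:
#!/usr/bin/env texlua
-- build.lua - sketch only, not verified against the current l3build release
module = "example"
typesetfiles = {"documentation.drv"}
-- Assumption: extracted examples are typeset (but not distributed) before documentation.drv
typesetdemofiles = {"italic.tex", "bold.tex"}
kpse.set_program_name("kpsewhich")
dofile(kpse.lookup("l3build.lua"))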
# Organic groups automatic audience selection
I am working with organic groups. Within each group, permitted users are allowed to create their own pages.
My problem is that I need to set the audience for a new page automatically to the group the user is currently in. This is so that the user cannot manually change the group the post goes into; instead it is set automatically and permanently (unless an administrator changes it).
Does anyone know of a way that I might accomplish that?
You can use the prepopulate module to pass the group's gid to the node add form.
For example, the link will look something like this:
node/add/discussion?edit[group_audience][und]=4
You could then hide the group selection box with CSS so the user cannot change it. For a more robust solution you would code a form hook. Here is an example.
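A minimal sketch of such a form hook for Drupal 7; the form ID, the field name og_group_ref, and the widget structure are assumptions that depend on the site's field configuration:
<?php
/**
 * Implements hook_form_alter().
 * Sketch only: pre-set the group audience and stop ordinary users from changing it.
 */
function mymodule_form_alter(&$form, &$form_state, $form_id) {
  if ($form_id == 'page_node_form' && isset($form['og_group_ref'])) {
    $gid = 4; // Replace with the current group's ID (e.g. taken from og_context()).
    $form['og_group_ref'][LANGUAGE_NONE]['#default_value'] = array($gid);
    $form['og_group_ref']['#disabled'] = TRUE; // lock the audience field
  }
}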
I like the idea of using a Rule and have gotten it to work using the following Rule config: Event=Before Saving Content, Condition=Content is of a Certain Type, Action: Entity=Node, Group=Site:Current-group - YES - works like a charm!
I know this is an old thread, but I found this in my own search.
I'm new to this, but I think this is going to work for me.
New Rule:
Event = Before Saving New Content (Node/Content type)
Action = Data / Set data value / node:og-group-ref:0 (Groups Audience)
With Drupal 7, you don't need the Prepopulate module.
You can just use this node/add/blog?og_group_ref=1 where 1 is the group ID.
Journal of Group Theory (IF 0.466), Pub Date: 2020-11-17, DOI: 10.1515/jgth-2020-0107
Colin D. Reid
We classify the locally compact second-countable (l.c.s.c.) groups 𝐴 that are abelian and topologically characteristically simple. All such groups 𝐴 occur as the monolith of some soluble l.c.s.c. group 𝐺 of derived length at most 3; with known exceptions (specifically, when 𝐴 is $\mathbb{Q}^n$ or its dual for some $n \in \mathbb{N}$), we can take 𝐺 to be compactly generated. This amounts to a classification of the possible isomorphism types of abelian chief factors of l.c.s.c. groups, which is of particular interest for the theory of compactly generated locally compact groups.
# Discrete Fourier Transform - Inadequate precision and wrong peak positions
I have a small problem with a piece of code. I searched the site but was not able to find an answer to my specific question. I am trying a quick test of the Fourier capabilities (on lists) of Mathematica. I am using the following code
a := 0.1;
L[x_] := 1/(x^2 + 1);
s[x_] := L[x] + a Cos[x];
Note that the Fourier transform is $$\mathcal{F}(s(t))(\omega)=\sqrt{\frac{\pi }{2}} e^{-\left| \omega \right| }+a\sqrt{\frac{\pi }{2}} \delta (\omega -1)+a\sqrt{\frac{\pi }{2}} \delta (\omega +1)$$ Then I create a list of values
STable1 := Table[s[t], {t, 0, 40, 0.001}];
and then I apply Fourier
ListLinePlot[Abs[Fourier[STable1]], PlotRange -> {{0, 30}, {0, 10}},Frame -> True]
this is what I get
Now the peak should be at 1 (theoretically it is a Dirac Delta function). My question is:
How to get more points (so a smoother function)?
• Have a look here – Stelios Jan 5 '16 at 22:40
• Thanks @Stelios I will check it! – Umberto Jan 7 '16 at 22:12
@Hugh has given a good answer, explaining how to increase the frequency-domain resolution by increasing the maximum measurement time tmax. The relationship is df=1/tmax. One additional point is that you have not matched the endpoints of your Cos function, so that the DFT implemented by Fourier will not return a delta function. Better resolution of your delta function component is achieved by setting the maximum measured time to an integral multiple of cosine periods. See here, for example.
tmax = 12*Pi;
dt = 0.001;
df = 1/tmax;
n = Floor[tmax/(2*dt)];
STable = Table[s[t], {t, 0, tmax - dt, dt}];
ListLinePlot[
Transpose[{Table[i*df, {i, 0, n}],
Take[Abs[Fourier[STable]], n + 1]}],
PlotRange -> {{0, 1}, {0, 14}}, Frame -> True,
FrameLabel -> {"Frequency", "Amplitude"}]
This plot shows the new result along with the original transform.
Increasing tmax in multiples of 2 Pi will further improve the plot.
Some basic properties of Fourier may be found here. Starting with your code again but dropping your set delayed we have
a := 0.1;
L[x_] := 1/(x^2 + 1);
s[x_] := L[x] + a Cos[x];
STable1 = Table[s[t], {t, 0, 40, 0.001}];
ft = Fourier[STable1];
We now need to construct a frequency axis. You have used a time increment of 0.001 and thus this corresponds to a sample rate of 1000 samples per second. The Fourier frequency axis goes between zero and one point less than the sample rate. Thus the frequencies in radians per second corresponding to your data are
sr = 1000;
ff = Table[2 \[Pi] (n - 1) sr/Length[STable1], {n, Length[STable1]}];
Now we can put an axis on your plot
ListLinePlot[Transpose[{ff, Abs[ft]}], PlotRange -> {{0, 5}, {0, 10}},
Frame -> True]
To get more frequency resolution we have to increase the number of points thus
STable1 = Table[s[t], {t, 0, 400, 0.001}];
ft = Fourier[STable1];
ff = Table[2 \[Pi] (n - 1) sr/Length[STable1], {n, Length[STable1]}];
ListLinePlot[Transpose[{ff, Abs[ft]}], PlotRange -> {{0, 5}, {0, 40}},
Frame -> True]
Your delta function is now more clearly visible.
• Just a note: A more appropriate approach for increasing the frequency resolution is probably to perform Fourier on a zero-padded version of the sampled signal, i.e., Fourier[PadRight[STable1, FFTsize]], instead of increasing the observation interval, which may not be possible. – Stelios Jan 5 '16 at 22:44
• @Stelios Yes I agree; that is a useful method. However, the usual warning should apply that this is equivalent to interpolation and not the calculation of additional information. – Hugh Jan 5 '16 at 22:49
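For completeness, a small sketch of that zero-padding idea (the padded length 2^19 is an arbitrary illustrative choice); it interpolates the plotted spectrum onto a finer frequency grid without adding information:
padded = PadRight[STable1, 2^19];
ftPad = Fourier[padded];
ffPad = Table[2 \[Pi] (n - 1) sr/Length[padded], {n, Length[padded]}];
ListLinePlot[Transpose[{ffPad, Abs[ftPad]}], PlotRange -> {{0, 5}, {0, 40}}, Frame -> True]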
9.13
Volume: $\Delta S = nR\ln \frac{V_{2}}{V_{1}}$
Temperature: $\Delta S = nC\ln \frac{T_{2}}{T_{1}}$
Jessica Wakefield 1H
Posts: 58
Joined: Thu Jul 27, 2017 3:00 am
9.13
During the test of an internal combustion engine, 3.00 L of nitrogen gas at 18.5 °C was compressed suddenly (and irreversibly) to 0.500 L by driving in a piston. In the process, the temperature of the gas increased to 28.1 °C. Assume ideal behavior. What is the change in entropy of the gas?
I was wondering why the solutions manual uses nRln(V2/V1) when it specifies that it is irreversible. I would have used -PdV for the first process and then added it to the entropy from nRln(T2/T1)
Dylan Davisson 2B
Posts: 50
Joined: Thu Jul 27, 2017 3:00 am
Been upvoted: 2 times
Re: 9.13
The book says that the equation nRln(V2/V1) applies to both reversible and irreversible gas expansions, given that the gas expands between the same two states at constant temperature. Ultimately, this equation can be used in the problem because entropy is a state function: the sum of the entropy changes of the two reversible steps, one for the change in volume and one for the change in temperature, gives the correct total entropy change for the problem.
This is expressed on page 323 and shown in action with Example 9.5 on page 325 of the textbook.
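Symbolically, the state-function argument splits the single irreversible change into two reversible steps between the same initial and final states (a sketch of the combination only; no numbers are plugged in here):
$\Delta S_{total} = nR\ln \frac{V_{2}}{V_{1}} + nC_{V,m}\ln \frac{T_{2}}{T_{1}}$
where the first term is the isothermal volume change and the second is the constant-volume temperature change.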
# randomizr
Declare a random sampling procedure.
declare_rs(N = NULL, strata = NULL, clusters = NULL, n = NULL,
prob = NULL, strata_n = NULL, strata_prob = NULL, simple = FALSE,
check_inputs = TRUE)
## Arguments
N: The number of units. N must be a positive integer. (required)
strata: A vector of length N that indicates which stratum each unit belongs to.
clusters: A vector of length N that indicates which cluster each unit belongs to.
n: Use for a design in which n units (or clusters) are sampled. In a stratified design, exactly n units in each stratum will be sampled. (optional)
prob: Use for a design in which either floor(N*prob) or ceiling(N*prob) units (or clusters) are sampled. The probability of being sampled is exactly prob because with probability 1-prob, floor(N*prob) units (or clusters) will be sampled and with probability prob, ceiling(N*prob) units (or clusters) will be sampled. prob must be a real number between 0 and 1 inclusive. (optional)
strata_n: Use for a design in which strata_n describes the number of units to sample within each stratum.
strata_prob: Use for a design in which strata_prob describes the probability of being sampled within each stratum. Differs from prob in that the probability of being sampled can vary across strata.
simple: logical, defaults to FALSE. If TRUE, simple random sampling is used. When simple = TRUE, please do not specify n or strata_n. When simple = TRUE, prob may vary by unit.
check_inputs: logical. Defaults to TRUE.
## Value
A list of class "declaration". The list has five entries:
$rs_function, a function that generates random samplings according to the declaration.
$rs_type, a string indicating the type of random sampling used.
$probabilities_vector, a vector of length N indicating the probability of being sampled.
$strata, the stratification variable.
$clusters, the clustering variable.
## Examples
# The declare_rs function is used in three ways:
# 1. To obtain some basic facts about a sampling procedure:
declaration <- declare_rs(N = 100, n = 30)
declaration
#> Random sampling procedure: Complete random sampling
#> Number of units: 100
#> The inclusion probabilities are constant across units.
# 2. To draw a random sample:
S <- draw_rs(declaration)
table(S)
#> S
#> 0 1
#> 70 30
# 3. To obtain inclusion probabilities
probs <- obtain_inclusion_probabilities(declaration)
table(probs, S)
#> S
#> probs 0 1
#> 0.3 70 30
# Simple Random Sampling Declarations
declare_rs(N = 100, simple = TRUE)
#> Random sampling procedure: Simple random sampling
#> Number of units: 100
#> The inclusion probabilities are constant across units.
declare_rs(N = 100, prob = .4, simple = TRUE)
#> Random sampling procedure: Simple random sampling
#> Number of units: 100
#> The inclusion probabilities are constant across units.
# Complete Random Sampling Declarations
declare_rs(N = 100)
#> Random sampling procedure: Complete random sampling
#> Number of units: 100
#> The inclusion probabilities are constant across units.
declare_rs(N = 100, n = 30)
#> Random sampling procedure: Complete random sampling
#> Number of units: 100
#> The inclusion probabilities are constant across units.
# Stratified Random Sampling Declarations
strata <- rep(c("A", "B","C"), times=c(50, 100, 200))
declare_rs(strata = strata)
#> Random sampling procedure: Stratified random sampling
#> Number of units: 350
#> Number of strata: 3
#> The inclusion probabilities are constant across units.
declare_rs(strata = strata, prob = .5)
#> Random sampling procedure: Stratified random sampling
#> Number of units: 350
#> Number of strata: 3
#> The inclusion probabilities are constant across units.
# Cluster Random Sampling Declarations
clusters <- rep(letters, times = 1:26)
declare_rs(clusters = clusters)
#> Random sampling procedure: Cluster random sampling
#> Number of units: 351
#> Number of clusters: 26
#> The inclusion probabilities are constant across units.
declare_rs(clusters = clusters, n = 10)
#> Random sampling procedure: Cluster random sampling
#> Number of units: 351
#> Number of clusters: 26
#> The inclusion probabilities are constant across units.
# Stratified and Clustered Random Sampling Declarations
clusters <- rep(letters, times = 1:26)
strata <- rep(NA, length(clusters))
strata[clusters %in% letters[1:5]] <- "stratum_1"
strata[clusters %in% letters[6:10]] <- "stratum_2"
strata[clusters %in% letters[11:15]] <- "stratum_3"
strata[clusters %in% letters[16:20]] <- "stratum_4"
strata[clusters %in% letters[21:26]] <- "stratum_5"
table(strata, clusters)
#> clusters
#> strata a b c d e f g h i j k l m n o p q r s t u v
#> stratum_1 1 2 3 4 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
#> stratum_2 0 0 0 0 0 6 7 8 9 10 0 0 0 0 0 0 0 0 0 0 0 0
#> stratum_3 0 0 0 0 0 0 0 0 0 0 11 12 13 14 15 0 0 0 0 0 0 0
#> stratum_4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 16 17 18 19 20 0 0
#> stratum_5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 21 22
#> clusters
#> strata w x y z
#> stratum_1 0 0 0 0
#> stratum_2 0 0 0 0
#> stratum_3 0 0 0 0
#> stratum_4 0 0 0 0
#> stratum_5 23 24 25 26
declare_rs(clusters = clusters, strata = strata)
#> Random sampling procedure: Stratified and clustered random sampling
#> Number of units: 351
#> Number of strata: 5
#> Number of clusters: 26
#> The inclusion probabilities are constant across units.
declare_rs(clusters = clusters, strata = strata, prob = .3)
#> Random sampling procedure: Stratified and clustered random sampling
#> Number of units: 351
#> Number of strata: 5
#> Number of clusters: 26
#> The inclusion probabilities are constant across units.
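One case the examples above do not illustrate is stratum-specific inclusion probabilities. A brief sketch, assuming strata_prob takes one probability per stratum in the sort order of the stratum labels (verify against the package documentation):
# Stratified sampling with per-stratum inclusion probabilities (sketch)
strata <- rep(c("A", "B", "C"), times = c(50, 100, 200))
declaration <- declare_rs(strata = strata, strata_prob = c(0.1, 0.2, 0.3))
probs <- obtain_inclusion_probabilities(declaration)
table(probs, strata)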
## Gleanings on Bona Churchill
In 2002, Lonnie Thompson drilled a 460 meter ice core in a col between Mounts Bona and Churchill in Alaska. As of October 2003, they had analyzed over 5600 samples and concluded that the core covered approximately 2500 years. A presentation was made at AGU in December 2004. The data was not discussed in IPCC AR4 or even in Thompson's 2006 PNAS article. Actually, not only is the data completely unarchived, to date there is no journal publication whatsoever of these results (funded by the National Science Foundation's Office of Polar Programs grant OPP-0099311).
In mining promotions, whenever results are delayed, you can be 99% sure that they are not good results. Promoters can delay results a little bit hoping that more drilling will get a good hole, but there’s not much discretion. For some time, I’ve noticed the non-reporting of Bona Churchill (which I’ve compared to a similar situation at Sheep Mountain) and surmised that the results were not “good” for Thompson’s viewpoint: otherwise we’d have heard about it. Here’s one such prediction:
Here's my prediction about dO18 levels at Bona Churchill: 20th century dO18 levels will be more negative ("colder") than levels in the early 19th century – the opposite of the pattern that Thompson is promoting for tropical glaciers.
While the results remain unpublished, Follow the Money noted a reference to Bona Churchill in a workshop proceeding here, which contained a PPT presentation (81 MB) by Lonnie Thompson, the abstract to which stated:
Records providing the necessary time perspective may be reconstructed from chemical and physical properties preserved in the regional ice cover and ocean sediments. Comparisons are made among the geographically dispersed, annually dated ice cores records from the Antarctic Peninsula, the tropical Quelccaya ice cap (Peru) and Bona-Churchill (southeast Alaska) over the past 500 years. Decadally averaged δ18O histories demonstrate that the current warming at high elevations in mid- to low-latitudes is unprecedented for at least the last two millennia.
The PPT presentation contained an interesting graphic providing the first information on Bona-Churchill δ18O levels, and in sufficient detail to test my prediction. How do you think I did on my prediction?
Just for fun, let’s review the history of the Bona-Churchill project a little. Skip to the end if you want the results without waiting.
Bona-Churchill Background
The purpose of the Bona-Churchill drilling (presumably drawn from the proposal to NSF) is set out on an Ohio State webpage here:
Ice core reconstruction of North Pacific climate variability and environmental history from the Bona-Churchill Ice Field, Alaska. This project is funded by the National Science Foundation's Office of Polar Programs (OPP-0099311) and is in its third year.
This project was designed to retrieve and analyze ice cores from the col situated between Mt. Bona and Mt. Churchill (61° 24′ N, 141° 42′ W; 4420 m asl) in the Wrangell-St Elias Mountains of southeastern Alaska. These records will fill a significant void in the high resolution climate history of this region. These new ice core records will complement and extend the existing tree ring-based climate records for the region and will add to the suite of high resolution ice core histories now emerging from other north polar ice fields. Global scale syntheses of past climate variability include ice core contributions from both Antarctica and Greenland as well as from ten lower latitude, high altitude sites in Tibet, South America and Africa. To date the unique paleohistories preserved in Alaska's ice fields have not been tapped and thus have not contributed to this global climate synthesis. The sparseness of high resolution climate histories from the northeastern side of the Pacific Basin has been a major obstacle to advancing our understanding of the rapid and recent changes in the dynamical state of the Pacific region and its global teleconnections. The ice cores attained from the Bona-Churchill col will help fill this void and provide critical new insight to the climate history in this region.
Our scientific objectives for the Bona-Churchill cores include:
(1) Assess whether the warming of the last 30 years that appears to be amplified at high elevations in the tropics and subtropics extends to northwestern North America;
(2) Assess the character of the most recent step change in the dynamics of the Pacific Basin climate regime that occurred in 1976-77 and explore whether similar abrupt transitions have occurred in the past and if so, determine when and of what magnitude were the changes;
(3) Explore whether the recently identified multi-decadal ENSO-like mid-latitude climate variability has its roots in the tropical Pacific;
(4) Determine the bottom age of the ice on Bona-Churchill col; and
(5) Determine whether Mt. Churchill is indeed the source of the White River Ash.
By October 2003, Thompson’s group reported that they had completed over 5600 samples from over 75% of the core, which was estimated to cover 2500 years:
The analyses of the Bona-Churchill ice cores are now underway in the laboratories at OSU's BPRC. The primary measurements that are being made continuously along the length of all cores include the concentration and size distribution of insoluble microparticles (dust), δ18O, δD, and concentrations of the major anion and cation species. The upper sections of the cores have been analyzed for total Beta radioactivity. The annual accumulation rate has averaged ~1100 mm of water equivalent over the recent past. As of October 2003 we have analyzed 5600 δ18O, δD, dust and chemistry samples representing 320 meters of the 460-meter deep ice core. The dust and calcium concentrations show distinct annual variations and the preliminary results suggest that the annually resolved record will cover more than 2500 years. This bodes well for the recovery of a very high-resolution record of past climatic and environmental variability from these cores.
In an interview in May 2004, Thompson noted high levels of potassium in snow layers from the 1960s, which he thought might have come from China (given that potassium is an important fertilizer, it’s interesting to think of it being airborne at higher altitudes in the context of high-altitude tree ring growth):
A second surprise in the Bona-Churchill ice core is the high level of potassium found in snow layers from the 1960s.
Potassium is not something you usually find in an ice core, Thompson said.
His guess is that the potassium is from China, where wind lifted it from fertilized dry fields and carried it across the Pacific to settle in Alaska and elsewhere in North America. Thompson said he doesn't know if the level of potassium is high enough to boost the growth of Alaska plants.
A third question raised by the Bona-Churchill ice core is why the ice from 1,500 feet down in the core fell as snow in 1,000 B.C., which suggests that no ice existed on the mountain before then. The current theory of Alaska's past is that glaciers have covered at least the mountainous parts of the state for at least 12,000 years, when the last Ice Age started to wane. Though Thompson said volcanic heat might have affected the Bona-Churchill ice core, it shows no evidence of this.
Is it possible that the ice totally disappeared, and these glaciers are a function of climate in the last 5,000 years, in the cooling period that started in the middle of the Holocene (the last 11,000 years of Earth's history)? Thompson said. To me, it's a very important story to unravel.
In December 2004, there were two presentations at the AGU Fall Meeting. In one presentation, they stated:
In 2003, six ice cores measuring 10.5, 11.5, 11.8, 12.4, 114 and 460 meters were recovered from the col between Mount Bona and Mount Churchill (61° 24'N; 141° 42'W; 4420 m asl). These cores have been analyzed for stable isotopic ratios, insoluble dust content and concentrations of major chemical species. Total Beta radioactivity was measured in the upper sections. The 460-meter core, extending to bedrock, captured the entire depositional record at this site where ice temperatures ranged from -24° C at 10 meters to -19.8° C at the ice/bedrock contact. The shallow cores allow assessment of surface processes under modern meteorological conditions while the deep core offers a ∼1500-year climate and environmental perspective. The average annual net balance is ∼1000 mm of water equivalent and distinct annual signals in dust and calcium concentrations along with δ18O allow annual resolution over most of the core. The excess sulfate record reflects many known large volcanic eruptions such as Katmai, Krakatau, Tambora, and Laki which allow validation of the time scale in the upper part of the core. The lower part of the core yields a history of earlier volcanic events. The 460-m Bona-Churchill ice core provides a detailed history of the 'Little Ice Age' and medieval warm periods for southeastern Alaska. The source of the White River Ash will be discussed in light of the evidence from this core. The 460-m core also provides a long-term history of the dust fall that originates in north-central China. The annual ice core-derived climate records from southeastern Alaska will facilitate an investigation of the likelihood that the high resolution 1500-year record from the tropical Quelccaya Ice Cap (Peru) preserves a history of the variability of both the PDO and the Aleutian Low.
In a second presentation, they reported:
The White River Ash (WRA) is a well-documented bi-lobate Plinian deposit covering as much as 540,000 square km of the Yukon Territory, Canada and adjoining eastern Alaska. Recent studies have identified the source of the ash as Mount Churchill in the St. Elias Mountains of southeastern Alaska by comparing pumice deposits from the summit area of Mount Churchill with more distal pumice deposits of the WRA (e.g. McGimsey et al., 1990; Richter et al., 1995). In spring 2002 a team from The Ohio State University’s (OSU) Byrd Polar Research Center recovered a 460-m long ice core drilled to bedrock in the col (elevation 4420 masl) between Mount Churchill and Mount Bona (4 km southwest) to reconstruct a proxy climate history for the region. This core is also ideal to assess whether Mount Churchill is the source of the WRA. No evidence of a visible ash layer was encountered during drilling. Borehole temperatures of -24 degrees C at 10m depth and -19.8 degrees C at the ice-bedrock interface indicate the glacier is frozen to its bed. After being returned frozen to OSU the core was cut into 12,162 samples that were analyzed for stable isotopic ratios, insoluble particles and soluble chemistry. A preliminary time scale was developed using annual variations in oxygen isotopes, dust and calcium concentrations, beta-radioactivity (bomb horizons) and well-documented historic volcanic eruptions. The ∼1500 year long record shows elevated sulfate values at ∼803AD possibly associated with the second of two eruptions in the past 2000 years that produced the eastern lobe of the WRA deposit. The paleoclimate records appear to be stratigraphically continuous and show no evidence of a depositional hiatus. The absence of an ash layer in the core suggests that the WRA deposit requires further investigation, and the source and age of the WRA will be addressed.
Fisher and Mount Logan
Around the same time as Thompson was drilling the Bona-Churchill ice field, David Fisher and associates were drilling a similar length of core at the Eclipse ice field near Mount Logan.
Two cores (345 and 130 m) were recovered in 2002 (Table I).
They submitted a manuscript on May 5, 2005, revised and accepted Jan 13, 2006, published in Sept 2006. Fisher et al. showed an interesting graphic (previously discussed here) which showed a dramatic and sharp drop in the 1840s (as well as a sharp change ~800 AD held to signal the start of the MWP).
Original Caption: FIGURE 3. (A) The d18O for PRCol (5 340 m asl) and the dD for Eclipse (3 017 m asl) ice core sites, smoothed with a 5-year low-pass filter. At PRCol there is an abrupt shift in d18O of about 3 ca. A.D. 1840, that is not evident in the Eclipse record. The older NWCol Logan core also has a similar shift at the same date. We suggest that prior to A.D. 1840 the moisture flow was predominantly zonal with North Pacific sources of water, and after A.D. 1840 the flow was mostly "modern", delivering moisture from more southerly sources. The higher site receives relatively much more distant southern warm-source moisture than the lower. Compare the A.D. 1840 shift to that of A.D. 1976. (B) The deuterium excess plot for PRCol, indicating a major shift of moisture source ca. A.D. 1840. The larger excess points to warmer source oceans providing the moisture. (C) A plot of ENSO strength statistics implying that a regime shift occurred in the mid-19th century.
Fisher et al hypothesized a re-arrangement of hemispherical wind circulation patterns in the 1840s, taking place over only a few years, changing the moisture source and thus dO18 values.
The synoptic situation that would go along with the shift is that a deeper more northwest-centred Aleutian Low would draw moisture from farther south. Comparison of stable isotope series over the last 2000 years and model simulations suggest sudden and persistent shifts between modern (mixed) and zonal flow regimes of water vapour transport to the Pacific Northwest. The last such shift was in A.D. 1840. Model simulations for modern and "pure" zonal flow suggest that these shifts are consistent regime changes between these flow types, with predominantly zonal flow prior to ca. A.D. 1840 and modern thereafter. The 5.4 and 0.8 km asl records show a shift at A.D. 1840 and another at A.D. 800. It is speculated that the A.D. 1840 regime shift coincided with the end of the Little Ice Age and the A.D. 800 shift with the beginning of the European Medieval Warm Period. The shifts are very abrupt, taking only a few years at most.
Bona-Churchill Results
Here is a graphic showing dO18 from Bona-Churchill from the workshop proceedings (Thompson and Moseley-Thompson 2006):
The dO18 history is a bit different from the Mount Logan history in that the 20th century decline is more muted; squinting at the graphic, one can perhaps discern a small decline between the two centuries. However, the visual impression is that there has been negligible change between the 19th and 20th centuries in dO18 values and that there were "warmer" values from about 1350-1600. Does it mean anything? Who knows. If one thought that there had been strong warming in Alaska in the 20th century (and there is evidence of this), then one could hardly say that the dO18 histories were a useful proxy for this warming.
Thompson and Moseley-Thompson say of this data:
Decadally averaged δ18O histories demonstrate that the current warming at high elevations in mid- to low-latitudes is unprecedented for at least the last two millennia.
The U-word again. But is there any evidence in the Bona-Churchill δ18O history of unprecedented warming? I can't see any. Maybe you need to be a dendrochronologist to see it. Again, please note that the question here is not whether there is or isn't "unprecedented" warming, but whether the Bona-Churchill δ18O history provides any evidence of unprecedented warming. I think not.
I think that my prediction for Bona-Churchill was pretty good: there's nothing here that "helps" Thompson's story that higher ice core dO18 values in the 20th century show global warming.
References:
Fisher D.A., Wake C., Kreutz K., Yalcin K., Steig E., Mayewski P., Anderson L., Zheng J., Rupper S., Zdanowicz C., Demuth M., Waszkiewicz M., Dahl-Jensen D., Goto-Azuma K., Bourgeois J.B., Koerner R.M., Sekerka J., Osterberg E., Abbott M.B., Finney B.P. and Burns S.J. (2004) Stable isotope records from Mount Logan, Eclipse ice cores and nearby Jellybean Lake. Water cycle of the North Pacific over 2000 years and over five vertical kilometres: sudden shifts and tropical connections. Géographie physique et Quaternaire, vol. 58. url
Lonnie G. Thompson, Ellen Mosley-Thompson 2006. Glaciological evidence for abrupt climate change: past and present. NSIDC: An International Workshop: Antarctic Peninsula Climate Variability: Observations, Models, and Plans for IPY Research. ftp://sidads.colorado.edu/pub/ppp/IPY-APCV/LonnieThompsonWorkshop.pps
1. Harry Eagar
Posted Nov 12, 2007 at 10:29 AM | Permalink
Not off topic but an minor expansion of one point: potassium in dust from China is credited with making plant life possible on Kauai in the Hawaiian islands.
Kauai is very rainy, which leaches out minerals. And famously green.
However, this has been going on for millions of years, not just in the 1960s.
I don’t have my reference to this handy, but if anyone cares I can look it up.
2. Sam Urbinto
Posted Nov 12, 2007 at 10:29 AM | Permalink
Surprise, surprise.
3. Harry Eagar
Posted Nov 12, 2007 at 10:30 AM | Permalink
Not off topic but a minor expansion of one point: potassium in dust from China is credited with making plant life possible on Kauai in the Hawaiian islands.
Kauai is very rainy, which leaches out minerals. And famously green.
However, this has been going on for millions of years, not just in the 1960s.
I don’t have my reference to this handy, but if anyone cares I can look it up.
4. Dana H.
Posted Nov 12, 2007 at 10:36 AM | Permalink
Note the careful phrasing here: "Decadally averaged δ18O histories demonstrate that the current warming at high elevations in mid- to low-latitudes is unprecedented…" Presumably, Alaska is high-latitude, so no statement is being made here about the Bona-Churchill results; they must be referring to other results.
As much as anything, this reinforces your main point. The attitude seems to be, “The Bona-Churchill data don’t support the preferred narrative and/or our hypothesis that δ18O levels are a good temperature proxy, so we will simply refuse to draw conclusions from the data.”
5. L Nettles
Posted Nov 12, 2007 at 10:42 AM | Permalink
Not off topic but a minor expansion of one point: potassium in dust from China is credited with making plant life possible on Kauai in the Hawaiian islands.
Harry seems to be referring to this.
1999
Chadwick, O.A., L.A. Derry, P.M. Vitousek, B.M Huebert, and L.O. Hedin. Changing sources of nutrients during four million years of ecosystem development.
Nature 397: 491-497
6. Crispytoast
Posted Nov 12, 2007 at 10:43 AM | Permalink
Is 61 degrees considered mid-latitude?
7. John A
Posted Nov 12, 2007 at 10:51 AM | Permalink
If I remember rightly, Plinian refers to volcanic eruptions such as the one that overwhelmed Pompeii in AD 79. So there would have been an enormous ashcloud towering over Mount Churchill, which would have collapsed all over the local area.
So where is the evidence of the ashfall? It’s not in the core even though much more distant volcanic events are “recorded” in the ice core. Something is clearly wrong with the hypothesis.
There are clearly volcanic events recorded in the ice core record (including one in the 1500s that must have been close by).
As for the Thompson claims of “unprecedented” warming of the 20th Century, we’ll have to file them with the Mannian claims to robustness – under “I” for “Imaginary”
8. Larry
Posted Nov 12, 2007 at 11:05 AM | Permalink
I would think that a more logical explanation for K in very fine ash form would be from burning firewood (and possibly forest fires, as well). This, of course, would have started long before the 20th century, but probably increased as the population in China, India, and Siberia increased. I don’t know whether to expect it to be on the increase or decrease currently.
Wood ash is very high in K (ever heard of potash?).
9. MarkW
Posted Nov 12, 2007 at 11:08 AM | Permalink
JohnA,
“Volcanic event” could just refer to rising magma that heated the ground, maybe even vented some steam, but did not result in an eruption that would have produced large amounts of ash.
10. windansea
Posted Nov 12, 2007 at 11:09 AM | Permalink
Steve McIntyre
http://www.climateaudit.org/?p=2335#comment-159965
11. Bernie
Posted Nov 12, 2007 at 11:13 AM | Permalink
Steve:
Is it worth another email to Thompson about Bona Churchill, now that you have some indication that the d18O results are somewhat at odds with his summary of the earlier ice cores.
Also, what does Fisher say? Surely he has been in contact and discussions with Thompson? His results appear to reinforce the lack of a temperature signal from the BC ice core.
The 1840 shift looks pretty dramatic. A good thing Gore was not around then!
12. John A
Posted Nov 12, 2007 at 11:13 AM | Permalink
MarkW: the eruption producing the White River Ash was described as “Plinian” – which refers to a specific type of volcanic event.
See this link at the USGS for more.
13. steven mosher
Posted Nov 12, 2007 at 11:15 AM | Permalink
http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=0099311
This one, right? I haven't been through the grant requirements; I think you looked at this in the past, however, right?
14. Steve McIntyre
Posted Nov 12, 2007 at 11:26 AM | Permalink
#13 I notice that the grant period expired in 2006 – without any publication.
15. Steven mosher
Posted Nov 12, 2007 at 11:45 AM | Permalink
RE 14. I think the PM should probably get a letter.
I started to go through the grant requirements but they have been changing almost biannually.
16. crosspatch
Posted Nov 12, 2007 at 12:20 PM | Permalink
“So where is the evidence of the ashfall?”
It depends on where you look. There were two primary White River Ash (WRA) eruptions. During the first, the wind was from the South, causing a Northerly distribution, while the winds were from the West during the second eruption, causing an Easterly distribution of the tephra.
17. Steve Moore
Posted Nov 12, 2007 at 12:37 PM | Permalink
…for at least the last two millennia.
Maybe the original notes said “last 200 years” and there was an error in transcription.
18. Fred
Posted Nov 12, 2007 at 12:55 PM | Permalink
That’s not a hockey stick.
I play hockey and that is not used in my game.
That’s a pool cue.
Just depends on what end you look from.
19. MattN
Posted Nov 12, 2007 at 1:14 PM | Permalink
I want to make sure I have this perfectly clear.
The actual observed data shows virtually zero difference between 19th and 20th century dO18 levels, and they are, in fact, lower than levels 500ish years ago, yet somehow this proves unprecedented warming over at least the last 2000 years?
Do I have that right?
Steve: This is not the only data in the presentation. It is excluded from Thompson’s composite.
20. richardT
Posted Nov 12, 2007 at 1:27 PM | Permalink
Presentation implies that a MS is in press with PNAS. They should be fairly quick.
21. Bernie
Posted Nov 12, 2007 at 1:31 PM | Permalink
The reality is that, just as in the case of tree rings, d18O is something that can be measured apparently to a fairly high level of precision. However, it remains unclear what signals (and what noise) are contained in this measure. One can imagine all sorts of factors and interaction of factors influencing the level of d18O and based on other articles Thompson et al are aware of them. What is troubling is that Thompson (and Mann) appear to assume that temperature is the dominant signal. What the BC data shows in part is that this emphasis on the temperature signal results in problems for either the temperature record or the strength of the temperature signal.
Does anyone have any explanation for the apparent silence on the issues around the validity of this measure?
Again and again, the image that springs to mind is of the drunk looking for his car keys under the street light.
22. Steve McIntyre
Posted Nov 12, 2007 at 1:31 PM | Permalink
#20. This was a May 2006 presentation. Thompson PNAS 2006 came out in the summer and doesn't cover any of this information. No mention of Bona-C
23. nanny_govt_sucks
Posted Nov 12, 2007 at 1:42 PM | Permalink
Is it possible that the ice totally disappeared, and these glaciers are a function of climate in the last 5,000 years, in the cooling period that started in the middle of the Holocene (the last 11,000 years of Earth's history)? Thompson said. To me, it's a very important story to unravel.
More evidence of a warmer-than-today Holocene? This seems to go along with the evidence for an ice-free arctic around the same period.
24. Derek Tipp
Posted Nov 12, 2007 at 1:53 PM | Permalink
What is the excuse from Thompson for not publishing the information? Surely if he has been given taxpayers' money, someone is responsible for getting value for money. Maybe a politician should investigate?
25. Bernie
Posted Nov 12, 2007 at 1:55 PM | Permalink
Steve McI:
Did Fisher archive his data? The article does not seem to mention it. Does the Geological Survey of Canada make the data available??
Fisher also references a 2004 abstract from Thompson on the annual climate and environmental variability at Bona Churchill. Given the geographic proximity I assume that Fisher would have been interested in sharing detailed results. Wouldn't Thompson have been equally interested in checking Fisher's data? Curiouser and curiouser. Bernie
Steve: Fisher sent data to me when I requested it.
26. John A
Posted Nov 12, 2007 at 1:57 PM | Permalink
Crosspatch:
It depends on where you look. There were two primary White River Ash (WRA) eruptions. During the first the wind was from the South causing a Northerly distribution while the winds were from the West during the second eruption causing a Easterly distribution of the tephra.
So why is there no evidence of ashfall directly below the mountain? Are the winds that strong in that part of Alaska?
27. Keith Herbert
Posted Nov 12, 2007 at 3:19 PM | Permalink
I read in the glossary that dO18 is the change in oxygen isotope level. Is this being used as a proxy for warming? If so, then it seems that the Bona Churchill graph does not support the claim of warming in the 20th century, as Steve stated. And in fact, I would think the most interesting period for a climatologist (should they value this information) would be the period from 1600 to 1750 where the dO18 decreases significantly before starting the gradual increase again.
28. Bernie
Posted Nov 12, 2007 at 3:29 PM | Permalink
#25
What does Fisher think of the Bona Churchill results? Will he discuss? The NH high altitude ice cores appear to be tracking – with both saying no temperature related signal.
Any clue or references as to the link between solar activity and d18O?
29. steven mosher
Posted Nov 12, 2007 at 5:02 PM | Permalink
RE 24..
The grants may or may not require publication of the data. There is a requirement for ANNUAL REPORTS, which are given in accordance with a standard form, and a FINAL report as well. There may be no requirements to provide the data to the contracting agency. The actual award may spell this out. The person that would know this would be the PM. The PM will side with Thompson since Thompson has made the PM's life meaningful. The PM may be able to decide that the source data is not important and leave it under the control of the PI. I'm looking through the docs as I get time.
30. Geoff Sherrington
Posted Nov 12, 2007 at 6:17 PM | Permalink
In simple terms that can be criticised, the production of oxygen isotope anomalies is commonly attributed to fractionation during evaporation of warm versus cold waters, then precipitation at the cold site being measured.
If the depositing winds can change direction (as is theorised by the absence of certain volcanic ash) then surely they can change from a warm water source to a cold water source. If they do this frequently, could this not remix the oxygen isotopes? Could this not produce a result close to null change? A valid inference is that the utility of ice core thermometry is site-specific and that quantitative comparisons from place to place on the globe are simply naive. As is the qualitative hypothesis for original evaporative fractionation, especially when it is turned into a quantitative proxy for temperature.
I agree with # 21 Bernie.
Re potassium levels in ice core. Essentially all plants need potassium at some level. There is enough potassium in most rock types to be non-limiting to growth in most places. Rocks weather in rain and air and make potassium available to plant roots. The potassium circulating in the atmosphere (major volcanic episodes excepted) would be expected to be both soluble and minuscule through precipitation in rain. It's an observation that needs checking and it might lead to an advancement in Science, but not all goods are made in China yet.
Steve: the White Mt BCPs are typically in dolomite and I would presume very K-deficient. There's a nice picture showing a geological contact through the change in vegetation from BCP (dolomite) to big sagebrush (sandstone).
31. Larry
Posted Nov 12, 2007 at 7:47 PM | Permalink
30, that's correct. The amount of heavy water (D2O, D2O18, H2O18, etc.) is a function of the temperature where the water condensed. That could be a considerable distance from where it landed, especially if it's snow. And it would strongly depend on the cloud height. This seems like a pretty loose relationship. And that's not even accounting for the fact that the amount in the atmosphere is a function of the temperature where it vaporized, which could be over a pretty wide area.
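For reference, since several comments in this exchange use the notation loosely: $\delta^{18}O$ (written dO18 in this thread) is conventionally defined as
$\delta^{18}O = \left( ({}^{18}O/{}^{16}O)_{sample} / ({}^{18}O/{}^{16}O)_{VSMOW} - 1 \right) \times 1000$ (in per mil),
and the classic Dansgaard (1964) spatial calibration for polar precipitation is roughly $\delta^{18}O \approx 0.69\,T - 13.6$ per mil, with T the mean annual surface temperature in degrees C. Those coefficients are indicative only; as the comments above stress, the fractionation history depends on the moisture source, transport path and condensation temperature, so any such relation is site-specific.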
32. crosspatch
Posted Nov 12, 2007 at 8:45 PM | Permalink
“So why is there no evidence of ashfall directly below the mountain. Are the winds that strong in that part of Alaska?”
No idea but there is evidence of ashfall in other areas relatively nearby. According to Thompson himself in this article, he says that the source of the ash must be someplace else because he got all the way down to where there was no ice on that mountain and found no ash.
33. Posted Nov 12, 2007 at 10:55 PM | Permalink
Re #24 (Derek Tipp) and #29 (steven mosher).
There are rules about data availability, but there is no enforcement mechanism. For details of how the NSF non-enforces those rules, see
http://www.climateaudit.org/?p=1443
34. Roger Dueck
Posted Nov 13, 2007 at 12:04 AM | Permalink
#32 Dr Thompson does not think like a geologist. He found granodiorite, presumably from uplifted deep-seated igneous intrusives penetrated by Mt Churchill, near the crater, i.e. upslope from the ice mass. This could easily explain the presence of GD in the ice, down-slope and at the base of the ice, eroded from the original mass. The tephra from the ignimbrite would have been in the thousands of degrees when deposited on the ice. It was not preserved! It disappeared in the ensuing flood, as did the Mt. St. Helens ashfall on the icefields. I would invoke Occam's razor and suggest he look for the simple explanation.
35. Geoff Sherrington
Posted Nov 13, 2007 at 12:16 AM | Permalink
Re # 30 and not all so important,
Of the three major plant nutrient elements N, P and K, P is commonly the most soluble and as such sometimes moves large distances laterally in groundwater. It can limit the growth of plants but I cannot recall plants not growing at all because of K deficiency alone, though there will no doubt be occasional places where that will happen. Yes, you can see geobotanical changes with K deficiency, but the best visual examples are from P. Phosphate is usually comparatively insoluble in solis and rocks as you know.
If the K anomaly reported in the core is from wind-blown potassium fertiliser, then I would expect also an anomaly in phosphate, which should travel together from source but be more persistent than K. The phosphate anomaly would have a probability of association with a Ca anomaly on the micro scale, but there are other confounding Ca sources at macro scale.
36. Geoff Sherrington
Posted Nov 13, 2007 at 12:52 AM | Permalink
Correction to # 35 line 1. K is the most soluble, not P. Typo. I type terribly. Also, Line 6, “solis” should be “soils”.
Apologies Geoff.
37. Hans Erren
Posted Nov 13, 2007 at 1:39 AM | Permalink
re 24:
What is the excuse from Thompson for not publishing the information? Surely if he has been given taxpayers money, someone is responsible for getting value for money. Maybe a politician should investigate?
It's even dangerous. Did he split the cores and keep the other halves off site? If a tornado hits his freezer, he can't go back to Kilimanjaro to get fresh ice because the ice is gone.
Therefore:
DR THOMPSON I URGE YOU TO PUBLICALLY ARCHIVE A COMPLETE DIGITAL ICECORE LOG, AS FREQUENTLY REQUESTED
38. John A
Posted Nov 13, 2007 at 3:11 AM | Permalink
Roger Dueck:
The tephra from the ignimbrite would have been in the thousands of degrees when deposited on the ice. It was not preserved! It disappeared in the ensuing flood, as did the Mt. St. Helens ashfall on the icefields. I would invoke Occam's razor and suggest he look for the simple explanation.
That would mean that there is something fundamentally wrong with the dating, since Thompson claims that the ice field predates the last major eruption (c. 1250 years BP). If the ice field was hit by large amounts of hot ash and it's not there now, then there must have been a large lahar which would have swept everything away from the mountain.
39. Robinson
Posted Nov 13, 2007 at 5:00 AM | Permalink
Does anyone think it perhaps more likely that data is withheld (or at least hidden) by scientists not because they are part of some big conspiracy, but because they wish to maintain exclusive access to that data for future papers and/or discoveries? It strikes me that the competitive nature of grant funding would be a very great incentive to keep original work to yourself, rather than to share it with other scientists.
Just a thought.
40. Dodgy Geezer
Posted Nov 13, 2007 at 6:26 AM | Permalink
“…Does anyone think it perhaps more likely that data is withheld (or at least hidden) by scientists not because they are part of some big conspiracy, but because they wish to maintain exclusive access to that data for future papers and/or discoveries?…”
I think it more likely that they are ‘riding the tiger’. They did well out of pinning their reputations on the Global Warming hypothesis early, before it was able to be verified, and then the world’s media and politicians followed them. That left them ‘forced’ to keep the ball rolling.
There is now too much at stake to simply turn around and say 'Whoops, I was wrong!'. Indeed, much of the push now is in media, political and commercial areas where scientists do not have much influence. All Mann et al can do is hang on tight, and hope that when the inevitable crash comes, people will be too busy blaming Gore, or trying to hide their own unthinking support, to descend on those who started it.
If I were a ‘Warmist’ now, I would be trying to suppress any indication that I was wrong, and trying to get other people to join in supporting the thesis, while quietly saying a few things which could later be quoted to show that I didn’t ‘really’ support it unquestioningly….
41. DaveR
Posted Nov 13, 2007 at 8:08 AM | Permalink
#41 There are many mundane reasons why data sets don't get rapidly used in publications. I've never met one that has anything to do with the conspiracy theories about being "found out" that are popular on this website. Some of the most common reasons include lack of manpower or unexpected difficulties with the data quality. My own research group operates a number of geophysical instruments. But even if they gather data, without funds to support students or post-docs to work on the data analysis, progress will be very slow or may even stop. Tough, but that's the way it is when university science is casualised and relies on students and contract post-docs. Sometimes you get datasets which turn out to be much more difficult to work with than anticipated. Examples include interference causing deterioration in accuracy, instruments not working properly, human errors in making records, etc. – basically a mixture of human foul-ups and the cussed nature of the real world. Often, useful work can still be extracted from these data sets, but it can be much harder work and decisions have to be made as to where to direct scarce resources. In my own work this has led to some instruments/grants producing much more output (papers) than others. But it's nothing sinister. It's just the way the game works.
42. Bernie
Posted Nov 13, 2007 at 8:34 AM | Permalink
Steve:
It seems like comments are being dropped or snipped even when they are on thread. Is there something up?
43. DAV
Posted Nov 13, 2007 at 9:03 AM | Permalink
#40, Robinson.
Well, yes and no. It only seems fair that if someone were to collect data they should be allowed first stab at analyzing it. No one wants to do field work only to be scooped by the guy who sat around at home. A reasonable non-disclosure period should be allowed. How much should depend on how long an analysis takes. Usually, 1 to 2 years in some disciplines.
#29, steven mosher
Yeah but unfortunately that sets up the current situation where it’s extremely difficult for anyone attempting replication. I can see how it happened though. At one time, climate was in the who-really-cares category and most of the research was only of interest to a very few.
Today, of course, things have changed. Now it's imperative to know the quality and correctness of the work. It's going to be hard to fix things. It may be too late. Even if a concerted effort to revisit old analyses were started (à la M&M), there would be howls and accusations from both sides with vested interest in the outcome. And who would actually do it? There really is some specialized knowledge that would require cooperation from the paleoclimatologists.
But they’re very likely to see any revalidation as impugning their integrity (Yeah, I know this is going to generate a lot of smart-ass remarks but put yourself in their shoes, people, and ask how YOU would feel and react). Maybe, in the long run, Mann has really helped in this area because now it’s possible to point to his work and say “Look! Mistakes can and have been made! Let’s make sure there are no others. We really need to know.”
44. kim
Posted Nov 13, 2007 at 9:41 AM | Permalink
Bernie #43, Steve is the only editor I will accept. He deleted a comment of mine expressing concern for the reaction against science if and when AGW is exposed as a hoax.
===================================
45. Steve McIntyre
Posted Nov 13, 2007 at 9:55 AM | Permalink
I've moved various venting comments to Unthreaded. Why should a thread on Bona Churchill be used for extravagant claims about AGW motives – which I try to discourage in the first place?
#40. You're conflating a couple of issues. I've never suggested that all data archiving failures pertain to "sinister" motives. Practically I think that most non-archiving is for proprietary reasons. However the U.S. climate change program has longstanding policies requiring researchers to promptly archive data. If present policies were enforced, I'd be fine with that.
The delay in publishing results is a different issue – distinguishing between archiving data and writing a journal article. In mining promotions, if promoters have good results, they find a way of getting them out fast; while they delay bad results hoping that they’ll get some good results. It’s human nature; but there are limits on the delay.
In the cases of Bona-Churchill and Sheep Mt, I believed that the authors would have found a way to get the results into play if they had been "good" results and that, in these cases, the publishing delays were suspiciously long. I'm not saying this after the fact. I predicted "poor" results for both proxies a long time ago and my predictions are being vindicated.
You give a good reason why Sheep Mt and Bona Churchill haven’t been published. Or why Hughes didn’t use the updated Sheep Mt data in Salzer and Hughes.
46. Michel Le Normand
Posted Nov 13, 2007 at 10:33 AM | Permalink
This post should perhaps be transferred to Unthreaded, but it starts from two posts on this thread and concerns a genuine audit question to which I have had no response, perhaps because of an ill-formulated previous post.
Geoff S said in #30
In simple terms that can be criticised, the production of oxygen isotope anomalies is commonly attributed to fractionation during evaporation of warm versus cold waters, then precipitation at the cold site being measured.
And Larry in #31
30, that's correct. The amount of heavy water (D2O, D2O18, H2O18, etc.) is a function of the temperature where the water condensed.
These two positions are IMHO contradictory, but the second is the commonly accepted one, though I think it is false. If it were true, one would observe more heavy isotopes in ice during the cold period, and this proxy would be for the climate of cold and frozen zones. Yet it is the contrary which is observed. So the stable isotope deficits are proxies for evaporation zones, principally ocean surfaces (SST), a more global proxy indeed.
Congratulations to Steve and the blog.
48. Bernie
Posted Nov 13, 2007 at 10:40 AM | Permalink
I have no problem with editing out venting and speculations about motives, but my last comments raised what I thought were reasonable questions about the way ice core data can and should be aggregated together in a scientifically defensible manner – i.e., is Thompson's chart that combines Peruvian and Tibetan data reasonable, or is it cherry picking? If the isotope metric is subject to dramatically different mechanisms in different geographic locations then they should only be aggregated with great care, if at all, and with an explicit explanation as to why and how – kind of like bristlecone pine TRs. I am now reading the earlier threads which address this subject. It just seems that this thread naturally calls for a restatement of relevant major points from the earlier threads on the usefulness of d18O as a temperature proxy.
Steve: I moved it to the Gore Thermometer post where it belonged. What did it have to do with Bona Churchill? I move more than I delete.
49. Neil Fisher
Posted Nov 13, 2007 at 3:41 PM | Permalink
DaveR says:
My own research group operates a number of geophysical instruments. But even if they gather data, without funds to support students or post-docs to work on the data analysis, progress will be very slow or may even stop. Tough, but that's the way it is when university science is casualised and relies on students and contract post-docs. Sometimes you get datasets which turn out to be much more difficult to work with than anticipated. Examples include interference causing deterioration in accuracy, instruments not working properly, human errors in making records, etc. – basically a mixture of human foul-ups and the cussed nature of the real world. Often, useful work can still be extracted from these data sets, but it can be much harder work and decisions have to be made as to where to direct scarce resources. In my own work this has led to some instruments/grants producing much more output (papers) than others. But it's nothing sinister. It's just the way the game works.
I am not a scientist, yet this seems like a strange “excuse” to me. At this point in time, archiving digital versions of “raw” data and making it available on-line is relatively trivial. I’m guessing that you’d want a digital copy in any case for ease of (your own) use, and while I certainly appreciate cost constraints in the actual analysis of the data, and even the desire to keep “interesting” (and difficult/costly to produce) data private until you can publish something on it, surely at some point – say 1-2 years after collection – you will have a good idea whether you will be spending the time and effort on that data. Given the trivial costs associated with making that sort of data publicly available, and the ethical obligation to provide data for replication of your papers (if any) based on this data, it seems extremely odd to me that Steve has been able to find a number of cases where such data is *not* archived publicly, even after 20 odd years!
50. trevor
Posted Nov 13, 2007 at 4:11 PM | Permalink
Re #41 Dave R:
There are many mundane reasons why data sets don't get rapidly used in publications. I've never met one that has anything to do with the conspiracy theories about being "found out" that are popular on this website. Some of the most common reasons include lack of manpower or unexpected difficulties with the data quality. My own research group operates a number of geophysical instruments. But even if they gather data, without funds to support students or post-docs to work on the data analysis, progress will be very slow or may even stop. Tough, but that's the way it is when university science is casualised and relies on students and contract post-docs. Sometimes you get datasets which turn out to be much more difficult to work with than anticipated. Examples include interference causing deterioration in accuracy, instruments not working properly, human errors in making records, etc. – basically a mixture of human foul-ups and the cussed nature of the real world. Often, useful work can still be extracted from these data sets, but it can be much harder work and decisions have to be made as to where to direct scarce resources. In my own work this has led to some instruments/grants producing much more output (papers) than others. But it's nothing sinister. It's just the way the game works.
All very plausible, Dave, and I am sure there are many reasons why data sets do not get archived (some for 20 years). But surely you agree that if the data sets are not available, the work cannot be replicated, and therefore cannot be confirmed as sound science.
Scientists have a choice:
EITHER: Comply with sound scientific practice, particularly supporting proper data archiving, disclosure of methods etc, and have your work taken seriously.
OR: Fail to disclose data and methods. Obfuscate. Refuse to participate in rational discussion regarding your work. Refuse to support replication. The inevitable consequence of this approach (as I suspect Michael Mann, James Hansen and Phil Jones et al are rapidly finding out) is that your work will not (cannot) be taken seriously. That in turn must jeopardise continued access to grant funding as funding organisations come under pressure to enforce their rules relating to these matters.
Which camp do you want to align with, Dave R?
51. MarkW
Posted Nov 13, 2007 at 4:25 PM | Permalink
DaveR,
If the reason why scientists are reluctant to archive data is because quality issues have been discovered with the data, then what does this say regarding the reports that were generated using this same data?
52. Sam Urbinto
Posted Nov 13, 2007 at 4:35 PM | Permalink
Work has to be replicable. It’s not. Why not? Because the data is not available. Why not? I can see not releasing it until your material is published, but after? No. 20 years after? Heck no.
But forget that even. The standards are that the data is made available. The people responsible for making that happen, for enforcing the rules, are not enforcing them. In fact, as far as I can tell, they are active in helping to keep it suppressed. If the standards are there, that's no random thing. That's deliberate. Why are the standards there? Because if you can't get it, it's not science.
Do people interested in their reputations, in science itself and in furthering knowledge act this way? Does it surprise them that when they don't do what's required, people question their motives in the first place, much less when they are breaking the rules by not doing it? And even the people they work for are unconcerned about it?
There is no excuse for this. This is data making up published papers, not
53. Don Keiller
Posted Nov 13, 2007 at 4:57 PM | Permalink
Re #41 Dave R:
But when the results of these non-archived studies are being used as a blunt instrument to terrify the general public and as an excuse for massive changes in socioeconomic policy, then the “mundane” ceases to apply.
54. Anna Lang
Posted Nov 13, 2007 at 5:10 PM | Permalink
RE: #6
It is probably unlikely Professor Thompson would argue these, "ice cores from the col situated between Mt. Bona and Mt. Churchill (61° 24′ N, 141° 42′ W; 4420 m asl) in Wrangell-St Elias Mountains of southeastern Alaska," are from a middle latitude location. But, it never hurts to ask.
Undergraduate geography textbooks usually define the middle latitudes as 35-55 degrees N and S, while the higher latitudes poleward are referred to as the Subarctic/Subantarctic (55-66.5 degrees N/S) and the Arctic/Antarctic (66.5-90 degrees N/S).
However, in biogeographic terms the designation “high latitude” may vary somewhat. For example, Mary Belle Allen, “High-Latitude Phytoplankton,” Annual Review of Ecology and Systematics, Vol. 2, (1971), pp. 261-276, notes that relative to the marine environment, 50 degrees S likely qualifies, but in the northern hemisphere 60 degrees or higher (depending on location) would be the appropriate designation. Regarding terrestrial biomes, the IGBP High Latitude Transects (University of Alaska, Fairbanks), which include locations in Siberia, Canada, Scandinavia, and Alaska, cover latitudes ranging from 52-71 degrees N (the Alaska transects range 60-71 N) http://picea.sel.uaf.edu/projects/igbp.html
55. bender
Posted Nov 13, 2007 at 6:15 PM | Permalink
Scientists cannot have it both ways. If you want to use your data to influence public policy, then your data must be archived and you forsake any proprietary rights. That's democracy. That's accountability. That's responsible government. Don't like it? Leave it.
56. Sam Urbinto
Posted Nov 13, 2007 at 7:03 PM | Permalink
No, bender, they can (and do) have it both ways! 🙂
Hopefully what we’re doing changes that.
57. Larry
Posted Nov 13, 2007 at 7:13 PM | Permalink
55, Hypothetically, what do you do in the case where the scientists don’t have any intent of influencing policy, but NGOs grab your work and use it to effect policy? Obviously, people like Mann and Hansen are out-of-the-closet activists, but I can see someone like Ababneh not wanting to dot all the i’s and cross all the t’s (per SOP for the field), and yet getting dragged into a kerfuffle by some group who grabs the research and uses it for policy purposes.
Maybe this is why the lawyers are getting involved.
58. bender
Posted Nov 13, 2007 at 8:04 PM | Permalink
Larry, those cases are usually so isolated and rarefied as to be inconsequential. Individual cases of data abuse are unavoidable. It makes no sense to worry about them at a societal level. It's an individual problem. It's when the semi-science-based consensus-seeking policy juggernaut emerges that the solvable proprietary/public IP problem crops up. No need to split hairs. Design and enforce a disclosure policy that effectively regulates the juggernaut and all society will be better off.
59. Brooks Hurd
Posted Nov 14, 2007 at 12:47 AM | Permalink
I was interested to see Thompson’s slide of his class 100 clean room. Then I looked closely at the picture. Let me first put this into perspective. I have audited clean rooms professionally, consequently I have a reasonably good understanding of what items would cause problems in a clean room.
1. In that photo, you can see standard chairs. When you sit in a standard chair, air rushes out of the cushions. This air contains high levels of particles. There are special clean room chairs which do not spew out particles when someone sits on them.
2. Wood, standard paper, cardboard, and standard books are banned from clean rooms because they shed particles.
3. I have never seen a dot matrix printer in any clean room rated better than class 100,000. These printers are great particle generators.
This clean room may have been certified when it was built. I highly doubt that it would meet class 100 today with people working in it.
People use clean rooms to exclude particle contamination from sensitive products or from analytical samples. If a room is filled with particle generators, it is hardly a clean room.
I would be happy to audit Dr. Thompson’s clean room (for free), the next time that I am in Columbus.
60. MarkR
Posted Nov 14, 2007 at 7:31 AM | Permalink
SteveM
sudden and persistent shifts between modern (mixed) and zonal flow regimes of water vapour transport to the Pacific Northwest. The last such shift was in A.D. 1840.
Ties in with the Bristlecone Pine growth spurt you are looking at?
Also, if changes in wind patterns can have this dramatic an effect on temperature proxies, how can the effects be corrected for, if at all?
Steve: Note the change in dO18 values in bristlecones reported by Berkelhamer and Stott.
61. DaveR
Posted Nov 14, 2007 at 7:54 AM | Permalink
#49 Neil, Sometimes data sets need work before they are good enough to be archived and used by others. If it's been used in a publication, obviously it should be archived. But this thread is about a data set that was recorded in 2002 and then not used, or at least not used to produce a publication – quite different from one that's been used in publications but not archived afterwards.
#50 Trevor, Again, this thread is about a data set that was recorded and not used in publication. I’m just saying that in my experience there are many reasons why data sets can prove to be more tricky to work with than expected. There’s no particular need to start attributing sinister motives to Thompson.
#51 Mark, As far as I know, Thompson has produced no reports from this data set. It seems to have been used in a conference presentation, but that’s it.
#53 Don, I don't think even Thompson can terrify anyone with a non-existent publication.
Steve implied that the lack of publications from this data set is because Thompson is hiding it, because it didn't show what he (Thompson) wanted it to show. All I'm saying is that there are many less dramatic reasons why a data set may not lead to publication and may not be in a fit state to archive.
62. Sam Urbinto
Posted Nov 14, 2007 at 11:18 AM | Permalink
DaveR, there may indeed be less dramatic reasons, in theory, to explain it. The sheer uncooperativeness of so many of those who are creating the papers and doing the peer review implies otherwise, but it's possible.
However, when people have nothing to hide, and no ulterior motives, and are dedicated to science, etc., they usually give a reason why they're doing something. They say
“Yeah, this data’s not archival quality, but if you want it, here. Just keep in mind that it’s pretty much junk, and you probably won’t find anything useful to do with it.”
You do that publically, defusing any possible stink over the quality ahead of time, because you have already set expectations and said it's not very good. What can anybody say?
If you go to Real Climate, I forget which thread, it’s a long one, I seem to remember Gavin changing the reason they wouldn’t release the code about 5-10 times. Most were the lamest excuses I’ve seen in a while.
When they eventually released it, it became clear why: even with the many skillful programmers here, it doesn't even look like it runs.
I think any doubts that exist here are warranted based upon past and current behavior.
63. henry
Posted Nov 14, 2007 at 1:11 PM | Permalink
I went to the OSU website (Thompson's university), and found that another grad student was supposed to be using that same data for his thesis.
“In November 2003 David Urmann, a graduate student in the Department of Geological Sciences, will complete his Masters thesis entitled: An Evaluation and comparison of ice core data from Bona-Churchill and Quelccaya, and lake level data for the Western United States, Alaska and Peru as proxies of El Niño events during the last one hundred years.
David Urmann
Ph.D. Candidate
Dissertation topic: “A 1000-year record of ENSO and its response to climate”
M.S., Geological Sciences, The Ohio State University, 2004
Thesis Title: “ENSO and PDO variability in ice core and lake level records over the past century.”
B. S. Geology, Utah State University, 1997
email: urmann.1@osu.edu
64. Jemster
Posted Nov 14, 2007 at 1:22 PM | Permalink
Re. publically (sic): I don’t make the rules!
65. henry
Posted Nov 14, 2007 at 1:27 PM | Permalink
For a minute there, I thought we had something…
http://davidurmann.com/masters.htm
66. Dave Dardinger
Posted Nov 14, 2007 at 1:46 PM | Permalink
re: #65,
If you go to the Icecore Group you’ll see that he’s listed as a PhD. candidate and it gives his e-mail address if anyone is interested. Also of note is the picture of the group with Al Gore from back in 1994 before Mr. Urmann became a member of the group. I hope it doesn’t surprise anyone that Lonnie Thompson would display prominently a picture of them with Gore.
67. henry
Posted Nov 14, 2007 at 2:47 PM | Permalink
But I'm afraid that if we e-mail him and ask him about the BC ice core data, we'll get the same reply as we got from Linah Ababneh: that requests are improper, with mention of his getting advice from a lawyer.
The area listed as “data locations” on his thesis site (http://davidurmann.com/masters.htm) is showing as file not found.
Do we have any OSU alum that can visit the library there and see if his thesis is posted?
P.S. The connection to Mr. Gore shouldn't surprise anyone: Thompson and his wife were advisors to AIT.
68. Posted Nov 14, 2007 at 8:08 PM | Permalink
Hole punch on desk: 25 dollars.
Used grocery bag under the desk functioning as a trash receptacle: priceless.
69. Posted Apr 30, 2009 at 8:45 AM | Permalink
RE henry, #67 —
Last year, I checked Urmann’s master’s thesis, with its discussion of Bona Churchill, out of the OSU Geology Library and sent Steve a photocopy.
We haven’t heard from Steve about it, so I gather it wasn’t too informative for present purposes. I didn’t study it closely, but I didn’t see a table or even full-length graph of Bona Churchill d18O.
Nevertheless, Thompson has the data and NSF should insist he at least archive it. Perhaps Ellen Mosley-Thompson will write about it in her NAS inaugural article in PNAS??
70. Posted Sep 12, 2009 at 12:22 AM | Permalink
RAB note: I do not doubt there is global climate change, but I do have a problem with the way the media and some "scientists" are attributing the cause to man-made greenhouse gases.
As a geological scientist in a different profession (minerals exploration) and having +40 years of field experience, I offer the following provocative issues:
1. The geological record for the rise and fall of sea level, as evidenced by beach strands in Nome, AK and elsewhere – this is a huge indication of dramatic climate changes where sea level has risen and fallen by 200+ meters, not a few inches or a few feet (I consider this effect to be related to a real "climate change");
2. Incised channels of 20 to 50 meters deep in solid bedrock, well below current sea level (the top of which is 30 to 50 meters below sea level) in places like Susitna River and Lewis River, AK (also Ruby Creek and Limestone Creek in central Alaska);
3. Incision of these channels also points to a huge water flow from the hinterlands (greater than Niagara Falls), which must be related to a past very rapid temperature rise from a "global warming" – at a time well before Man made metal implements;
4. The retreat of Gulkana Glacier, AK in the period 1910 (Mendenhal)-1955 (Rose), and then in 1970 to 1972 (Richter, and by Blakestad);
5. Occurrence of 100 to 300+ feet of "muck" (loess) in the age of woolly mammoths in central and northern Alaska – loess is a glacially-derived wind-blown sediment, suggesting a huge glacial melting event(s) during that era;
6. I do not see the data for CH4 (methane) anywhere in the discussion – but there are huge deposits of methane at the ocean floor–sea water interface; these must have been part of the ocean-atmosphere equilibrium set that existed for perhaps tens of thousands to hundreds of thousands of years; and
7. Arctic sea ice is melting… OK, what data is there that it has not completely disappeared in recent geological times (before Man made metal implements) a dozen or more times, or over longer geological times?
Posted Aug 23, 2010 at 10:33 PM | Permalink
Re Thompson Bona Churchill core: Graduate student Urmann's dissertation has now been posted at his personal website "in partial fulfillment of the requirements," with a fuller discussion of Bona Churchill, including a complete time series of BC1. The data are not linked as far as I could tell.
https://notstatschat.rbind.io/2018/06/05/new-blog-home/
I’m not actually worried by that: one of the key features of a git repository is that it doesn’t have the only copy of any of your stuff. The main motivation for switching was to use blogdown rather than Tumblr, because my blog is mostly text.
https://www.physicsforums.com/threads/geometric-progressions-help.23048/
Geometric Progressions Help
1. Apr 28, 2004
Olly
I am having trouble with my geometric progressions, in that I have been given a question where I am given the 7th and 26th terms of a GP. I am required to find the ratio, however, which I could do if I had the first term. Usually I can do this as they only give me GPs that are one term apart, and I would divide the top by the bottom (say term6 = 3 and term7 = 4) and would end up with term1 = 3/4. How can I do this if the terms are as far apart as they are?
Welcoming any responses here
2. Apr 28, 2004
MathematicalPhysicist
I think you have too many variables, such as a1 and n (the number of terms), that are unknown; at least one of them is needed to solve for the quotient.
3. Apr 28, 2004
arildno
You know that in a geometric progression, the next term's ratio with the previous is a constant; let's call it x; that is GP(n+1)/GP(n)=x.
But then we must have: GP(n+2)/GP(n)=(GP(n+2)/GP(n+1))*GP(n+1)/GP(n)=x^(2).
Did that help?
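To connect this to the original question, here is a minimal worked step (the specific numbers below are made up purely for illustration): iterating the relation above, terms that are 19 positions apart satisfy
$a_{26}/a_{7} = x^{26-7} = x^{19}$, so $x = (a_{26}/a_{7})^{1/19}$.
For instance, if $a_7 = 3$ and $a_{26} = 3 \cdot 2^{19}$, then $x = 2$; no knowledge of the first term $a_1$ is needed to find the ratio.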
4. Apr 29, 2004
Olly
Thanks for the help, I've got it down pat now :) hope I'm ready for the maths test tomorrow
https://terrytao.wordpress.com/2008/03/28/285g-lecture-1-ricci-flow/
In the first lecture, we introduce flows $t \mapsto (M(t), g(t))$ on Riemannian manifolds $(M,g)$, which are recipes for describing smooth deformations of such manifolds over time, and derive the basic first variation formulae for how various structures on such manifolds (e.g. curvature, length, volume) change by such flows. (One can view these formulae as describing the relationship between two “infinitesimally close” Riemannian manifolds.) We then specialise to the case of Ricci flow (together with some close relatives of this flow, such as renormalised Ricci flow, or Ricci flow composed with a diffeomorphism flow). We also discuss the “de Turck trick” that modifies the Ricci flow into a nonlinear parabolic equation, for the purposes of establishing local existence and uniqueness of that flow.
— Flows on Riemannian manifolds —
For the purposes of this course, we are not interested in just a single Riemannian manifold $(M, g)$, but rather a one-parameter family of such manifolds $t \mapsto (M(t), g(t))$, parameterised by a “time” parameter t. The manifold $(M(t), g(t))$ at time t is going to determine the manifold $(M(t+dt), g(t+dt))$ at an infinitesimal time t + dt into the future, according to some prescribed evolution equation (e.g. Ricci flow). In order to do this rigorously, we will need to “differentiate” a manifold flow $t \mapsto (M(t), g(t))$ with respect to time.
There are at least two ways to do this. The simplest is to restrict to the case in which the underlying manifold $M = M(t)$ is fixed (as a smooth manifold), so that only the metric $g = g(t)$ varies in time. As g takes values as sections in a vector bundle, there is then no difficulty in defining time derivatives $\dot g(t) = \frac{d}{dt} g(t)$ in the usual manner:
$\frac{d}{dt} g(t) := \lim_{dt \to 0} \frac{g(t+dt) - g(t)}{dt}$. (1)
We can of course similarly define the time derivative of any other tensor field by the same formula.
The one drawback of the above simple approach is that it forces the topology of the underlying manifold M to stay constant. A more general approach is to view each d-dimensional manifold M(t) as a slice of a d+1-dimensional “spacetime” manifold ${\bf M}$ (possibly with boundary or singularities). This spacetime is (usually) equipped with a time coordinate $t: {\bf M} \to {\Bbb R}$, as well as a time vector field $\partial_t \in \Gamma(T {\bf M})$ which obeys the transversality condition $\partial_t t = 1$. The level sets of the time coordinate t then determine the sets M(t), which (assuming non-degeneracy of t) are smooth d-dimensional manifolds which collectively have a tangent bundle $\hbox{ker}(dt) \subset T{\bf M}$ which is a d-dimensional subbundle of the d+1-dimensional tangent bundle $T{\bf M}$ of ${\bf M}$. The metrics g(t) can then be viewed collectively as a section ${\bf g}$ of $(\hbox{ker}(dt)^*)^{\otimes 2}$. The analogue of the time derivative $\frac{d}{dt} g(t)$ is then the Lie derivative ${\mathcal L}_{\partial_t} {\bf g}$. One can then define other Riemannian structures (e.g. Levi-Civita connections, curvatures, etc.) and differentiate those in a similar manner.
The former approach is of course a special case of the latter, in which ${\bf M} = M \times I$ for some time interval $I \subset {\Bbb R}$ with the obvious time coordinate and time vector field. The advantage of the latter approach is that it can be extended (with some technicalities) into situations in which the topology changes (though this may cause the time coordinate to become degenerate at some point, thus forcing the time vector field to develop a singularity). This leads to concepts such as generalised Ricci flow, which we will not discuss here, though it is an important part of the definition of Ricci flow with surgery (see Chapters 3.8 and 14 of Morgan-Tian’s book for details). Instead, we focus exclusively for now on the former viewpoint, in which $M = M(t)$ does not depend on time.
Suppose we have a smooth flow $(M,g(t))$ of metrics on a fixed background manifold M. The rate of change of the metric $g_{\alpha \beta}(t)$ is given by $\dot g_{\alpha \beta}(t)$. By the chain rule, this implies that any other expression that depends on this metric, such as the curvatures $\hbox{Riem}_{\alpha \beta \gamma}^\delta(t)$, $\hbox{Ric}_{\alpha \beta}(t)$, $R(t)$, should have a rate of change that depends linearly on $\dot g_{\alpha \beta}(t)$. We now compute exactly what these rates of change are. In principle, this can be done by writing everything explicitly using local coordinates and applying the chain rule, but we will try to keep things as coordinate-free as possible as it seems to cut down the computation slightly.
To abbreviate notation, we shall omit the explicit time dependence in what follows, e.g. abbreviating g(t) to just g. We shall call a tensor field w time-independent or static if it does not depend on t, or equivalently that $\dot w = 0$.
From differentiating the identity
$g^{\alpha \beta} g_{\beta \gamma} = \delta^\alpha_\gamma$ (2)
we obtain the variation formula
$\frac{d}{dt} g^{\alpha\beta} = - g^{\alpha \gamma} g^{\beta \delta} \dot g_{\gamma \delta}$ (3)
(here is a place where the raising and lowering conventions can be confusing if applied blindly!).
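Spelling out the intermediate step (a quick check, with no new equation number assigned): differentiating (2) and using the product rule gives
$\dot g^{\alpha \beta} g_{\beta \gamma} + g^{\alpha \beta} \dot g_{\beta \gamma} = 0$,
and contracting with $g^{\gamma \delta}$ then yields $\dot g^{\alpha \delta} = - g^{\alpha \beta} g^{\gamma \delta} \dot g_{\beta \gamma}$, which is (3) after relabelling indices.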
Next, we compute how covariant differentiation deforms with respect to time. For a scalar function f, the derivative $\nabla_\alpha f \equiv df$ does not involve the metric, and so the rate of change formula is simple:
$\frac{d}{dt} \nabla_\alpha f = \nabla_\alpha \dot f$. (4)
In particular, if f is static, then so is $\nabla_\alpha f$.
Now we take a static vector field $X^\beta$. From (4) and the product rule we see that the expression $\frac{d}{dt} \nabla_\alpha X^\beta$ is linear over $C^\infty(M)$ (interpreted as the space of static scalar fields). Thus we must have
$\frac{d}{dt} \nabla_\alpha X^\beta = \dot \Gamma_{\alpha \gamma}^\beta X^\gamma$ (5)
for some rank (1,2) tensor $\dot \Gamma_{\alpha \gamma}^\beta$. From the Leibniz rule and (4) we can obtain similar formulae for other tensors, e.g.
$\frac{d}{dt} \nabla_\alpha \omega_\beta = - \dot \Gamma_{\alpha \beta}^\gamma \omega_\gamma$ (5′)
for any static one-form $\omega_\beta$.
What is $\dot \Gamma_{\alpha \beta}^\gamma$? Well, we can work it out from the properties of the Levi-Civita connection. Differentiating the torsion-free identity
$\nabla_\alpha \nabla_\beta f = \nabla_\beta \nabla_\alpha f$ (6)
for static scalar fields f using (4), (5′), we conclude the symmetry $\dot \Gamma_{\alpha \beta}^\gamma = \dot \Gamma_{\beta \alpha}^\gamma$. Similarly, differentiating the respect-of-metric identity $\nabla_\alpha g_{\beta \gamma} = 0$ we conclude that
$- \dot \Gamma_{\alpha \beta}^\delta g_{\delta \gamma} - \dot \Gamma_{\alpha \gamma}^\delta g_{\beta \delta} + \nabla_\alpha \dot g_{\beta \gamma} = 0$. (7)
These two facts allow us to solve for $\dot \Gamma_{\alpha \beta}^\gamma$:
$\dot \Gamma_{\alpha \beta}^\gamma = \frac{1}{2} g^{\gamma \delta} ( \nabla_\alpha \dot g_{\beta \delta} + \nabla_\beta \dot g_{\alpha \delta} - \nabla_\delta\dot g_{\alpha \beta} )$ (8)
(compare with the usual formula for the Christoffel symbols in local coordinates).
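As a quick consistency check of (8) (this example anticipates the dilation flows discussed below): if $\dot g_{\alpha \beta} = a(t) g_{\alpha \beta}$ for a function $a(t)$ of time alone, then $\nabla_\alpha \dot g_{\beta \delta} = a \nabla_\alpha g_{\beta \delta} = 0$, so (8) gives $\dot \Gamma_{\alpha \beta}^\gamma = 0$; rescaling the metric by a spatial constant leaves the Levi-Civita connection unchanged, in agreement with (22).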
Now we turn to curvature tensors. We have the identity
$\nabla_\alpha \nabla_\beta X^\gamma - \nabla_\beta \nabla_\alpha X^\gamma = \hbox{Riem}_{\alpha \beta \delta}^\gamma X^\delta$ (9)
for any static vector field X. Taking the time derivative of this using (5), (5′), etc. we obtain
$-\dot \Gamma_{\alpha \beta}^{\delta} \nabla_\delta X^\gamma + \dot \Gamma_{\alpha \delta}^\gamma \nabla_\beta X^\delta + \nabla_\alpha \dot \Gamma_{\beta \delta}^\gamma X^\delta$
$+\dot \Gamma_{\beta \alpha}^\delta \nabla_\delta X^\gamma - \dot \Gamma_{\beta \delta}^\gamma \nabla_\alpha X^\delta - \nabla_\beta \dot \Gamma_{\alpha \delta}^\gamma X^\delta$
$= \dot{\hbox{Riem}}_{\alpha \beta \delta}^\gamma X^\delta$ (10)
which eventually simplifies to
$\dot{\hbox{Riem}}_{\alpha \beta \delta}^\gamma = \nabla_\alpha \dot \Gamma_{\beta \delta}^\gamma - \nabla_\beta \dot \Gamma_{\alpha \delta}^\gamma$. (11)
(one can view this as a linearisation of the usual formula for the Riemann curvature tensor in terms of Christoffel symbols). Combining (8) and (11), and using the fact that the Levi-Civita connection respects the metric, we thus have
$\dot{\hbox{Riem}}_{\alpha \beta \delta}^\gamma = \frac{1}{2} g^{\gamma \sigma} ( \nabla_\alpha \nabla_\delta \dot g_{\beta \sigma} - \nabla_\alpha \nabla_\sigma \dot g_{\delta \beta} - \nabla_\beta \nabla_\delta \dot g_{\alpha \sigma} + \nabla_\beta \nabla_\sigma \dot g_{\delta \alpha}$
$- \hbox{Riem}_{\alpha \beta \delta}^\mu \dot g_{\mu \sigma} - \hbox{Riem}_{\alpha \beta \sigma}^\mu \dot g_{\delta \mu}).$ (12)
Exercise 1. Show that (12) is consistent with the antisymmetry properties of the Riemann tensor, and with the Bianchi identities, as presented in the previous lecture. $\diamond$
Taking traces, we obtain a variation formula for the Ricci tensor
$\dot{\hbox{Ric}}_{\alpha \beta} = - \frac{1}{2}\Delta_L \dot g_{\alpha \beta} - \frac{1}{2} \nabla_\alpha \nabla_\beta \hbox{tr}(\dot g) +\frac{1}{2} \nabla_\alpha \nabla^\gamma \dot g_{\beta \gamma} + \frac{1}{2} \nabla_\beta \nabla^\gamma \dot g_{\alpha \gamma}$ (13)
where $\hbox{tr}(\pi) := g^{\alpha \beta} \pi_{\alpha \beta}$ is the trace, and the Lichnerowicz Laplacian (or Hodge-de Rham Laplacian) $\Delta_L$ on symmetric rank (0,2) tensors $\pi_{\alpha \beta}$ is defined by the formula
$\Delta_L\pi_{\alpha \beta} := \Delta \pi_{\alpha \beta} + 2 g^{\sigma \gamma} \hbox{Riem}_{\sigma \alpha \beta}^{\delta} \pi_{\gamma \delta} - \hbox{Ric}_\alpha^\gamma \pi_{\gamma\beta} - \hbox{Ric}_\beta^\gamma \pi_{\gamma \alpha}$ (14)
and $\Delta \pi_{\alpha \beta} = \nabla_\gamma \nabla^\gamma \pi_{\alpha \beta}$ is the usual connection Laplacian. Taking traces once again, one obtains a variation formula for the scalar curvature:
$\dot R = - \hbox{Ric}^{\alpha \beta} \dot g_{\alpha \beta} - \Delta \hbox{tr}(\dot g) + \nabla^\alpha \nabla^\beta \dot g_{\alpha \beta}$. (15)
Exercise 2. Verify the derivation of (13) and (15). [Aside: – I wonder if there are more direct derivations of (13) and (15) that do not require one to go through so many computations. One can use (22) and (26) below as consistency checks for these formulae, but this does not quite seem sufficient.] $\diamond$
We will also need to understand how deformation of the metric affects two other quantities, length and volume. The length $L(\gamma)$ of a curve $\gamma: [a,b] \to M$ in a Riemannian manifold $(M,g)$ is given by the formula
$L(\gamma) := \int_a^b g( \gamma'(u), \gamma'(u) )^{1/2}\ du =: \int_\gamma ds$. (16)
where $ds = \gamma_* g( \gamma'(u), \gamma'(u) )^{1/2}\ du$ is the measure on the curve $\gamma$ induced by the metric.
Exercise 3. If $\gamma$ varies smoothly in time (but with static endpoints $\gamma(a), \gamma(b)$), show that
$\frac{d}{dt} L(\gamma) = \frac{1}{2} \int_\gamma \dot g( S, S )\ ds - \int_\gamma g( \nabla_S S, V )\ ds$ (17)
where at every point $x = \gamma(u)$ of the curve, $S = \gamma'(u) /g( \gamma'(u), \gamma'(u) )^{1/2}$ is the unit tangent, and $V = \dot \gamma(u)$ is the variation field. (Strictly speaking, one needs to work on the pullback tensor bundles on ${}[a,b]$ rather than M in order to make the formulae in (17) well defined.) $\diamond$
The distance between two points x, y on a manifold is defined as $d(x,y) := \inf L(\gamma)$, where $\gamma$ ranges over all curves from x to y. For smooth connected manifolds, it is not hard to show (e.g. by using a reduction to the unit speed case, followed by a minimising sequence argument and the Arzelà-Ascoli theorem, combined with some local theory of short geodesics to ensure $C^1$ regularity of the limiting curve) that this infimum is actually attained for some minimising geodesic $\gamma$, which is then a critical point for $L(\gamma)$. (However, this minimising geodesic need not be unique if x, y are far apart.) From (17) we conclude that such geodesics must obey the equation $\nabla_S S = 0$ (thus the unit tangent vector parallel transports itself). We also conclude that
$\frac{d}{dt} d(x,y) = \inf \frac{1}{2} \int_\gamma \dot g( S, S )\ ds$ (18)
where the infimum is over all the minimising geodesics from x to y. Thus, a positive $\dot g$ (in the sense of quadratic forms) will increase distances between two marked points, while a negative $\dot g$ will decrease it.
Next, we look at the evolution of the volume measure $d\mu = d\mu(t)$. This measure is defined using any frame $(e_a)_{1 \leq a \leq d}$ and dual frame $(e^a)_{1 \leq a \leq d}$ as
$d\mu := \sqrt{\det g} |e^1 \wedge \ldots \wedge e^d|$ (18′)
where $\det g$ is the determinant of the matrix with components $g_{ab} = g( e_a, e_b )$ (one can check that this measure is defined independently of the choice of frame). Intuitively, this measure is the unique measure such that an infinitesimal cube whose sides are orthogonal vectors of infinitesimal length $r$, will have volume $r^d + O( r^{d+1} )$. It is not hard to show (using coordinates, and the variation formula $\frac{d}{dt} \det(A) = \hbox{tr}(A^{-1} \dot A) \det(A)$ for the determinant) that one has
$\frac{d}{dt} d\mu = \frac{1}{2} \hbox{tr}(\dot g)\ d\mu$. (19)
Thus, a positive trace for $\dot g$ implies volume expansion, and a negative trace implies volume contraction. This is broadly consistent with how length is affected by metric distortion, as discussed previously.
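For instance (again a check against the dilation formulas (22) below): if $\dot g_{\alpha \beta} = a g_{\alpha \beta}$ then $\hbox{tr}(\dot g) = g^{\alpha \beta} a g_{\alpha \beta} = a d$, and (19) gives $\frac{d}{dt} d\mu = \frac{d}{2} a\ d\mu$; equivalently, scaling the metric by $A(t)$ scales the volume measure by $A(t)^{d/2}$, as one expects from (18′).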
— Dilations —
Now we specialise to some specific flows $(M, g(t))$ of a Riemannian metric on a fixed background manifold M. The simplest such flow (besides the trivial flow $g(t)=g(0)$, of course) is that of a dilation
$g(t) := A(t) g(0)$ (20)
where $A(t) > 0$ is a positive scalar with $A(0)=1$. The flow here is given by
$\dot g(t) = a(t) g(t)$ (21)
where $a(t) := \frac{\dot A(t)}{A(t)} = \frac{d}{dt} \log A(t)$ is the logarithmic derivative of A (or equivalently, $A(t) = \exp(\int_0^t a(t')\ dt')$). In this case our variation formulas become very simple:
$\frac{d}{dt} g^{\alpha \beta} = - a g^{\alpha \beta}$
$\dot \Gamma_{\alpha \beta}^\gamma = 0$
$\dot{\hbox{Riem}}_{\alpha \beta \gamma}^\delta = 0$
$\dot{\hbox{Ric}}_{\alpha \beta} = 0$ (22)
$\dot R = - a R$
$\frac{d}{dt} d(x,y) = \frac{1}{2} a d(x,y)$
$\frac{d}{dt} d\mu = \frac{d}{2} a\ d\mu$;
note that these formulae are consistent with (20) and the scaling heuristics at the end of the previous lecture. In particular, a positive value of $a$ means that length and volume are increasing, and a negative value means that length and volume are decreasing.
— Diffeomorphisms —
Another basic flow comes from smoothly varying one-parameter families of diffeomorphisms $\phi(t): M \to M$ with $\phi(0)$ equal to the identity. This induces a flow
$g(t) := \phi(t)^* g(0)$ (23).
Infinitesimally, this flow is given by the Lie derivative
$\dot g(t) = {\mathcal L}_{X(t)} g(t)$ (24)
where $X(t) := \phi_*(t) \dot \phi(t)$ is the vector field representing the infinitesimal diffeomorphism at time t. (One can use Picard’s existence theorem to recover $\phi$ from X, though one has to solve an ODE for this and so the formula is not fully explicit.) The quantity $\pi_{\alpha \beta} := {\mathcal L}_X g_{\alpha \beta}$ is known as the deformation tensor of X, and it is a short exercise to verify the identity
$\pi_{\alpha \beta} = \nabla_\alpha X_\beta + \nabla_\beta X_\alpha$. (25)
(Informally, this tensor measures the obstruction to X being a Killing vector field.)
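A sketch of that short exercise, using the standard formula for the Lie derivative of a (0,2)-tensor in terms of any torsion-free connection: ${\mathcal L}_X g_{\alpha \beta} = X^\gamma \nabla_\gamma g_{\alpha \beta} + g_{\gamma \beta} \nabla_\alpha X^\gamma + g_{\alpha \gamma} \nabla_\beta X^\gamma$; since the Levi-Civita connection respects the metric, the first term vanishes and the remaining two terms are $\nabla_\alpha X_\beta + \nabla_\beta X_\alpha$, which is (25).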
It is clear from diffeomorphism invariance that all tensors deform via the Lie derivative:
$\frac{d}{dt} g^{\alpha \beta} = {\mathcal L}_X g^{\alpha \beta}$
$\dot{\hbox{Riem}}_{\alpha \beta \gamma}^\delta = {\mathcal L}_X \hbox{Riem}_{\alpha \beta \gamma}^\delta$ (26)
$\dot{\hbox{Ric}}_{\alpha \beta} = {\mathcal L}_X \hbox{Ric}_{\alpha \beta}$
$\dot R = {\mathcal L}_X R$.
[The formula for $\dot \Gamma_{\alpha \beta}^\gamma$ does not have such a nice representation, since $\Gamma_{\alpha \beta}^\gamma$ is not a tensor.]
Exercise 4. Establish the first variation formula $\frac{d}{dt} d(x,y) = \inf g( X(y), S(y) ) - g( X(x), S(x) )$, where the infimum ranges over all minimal geodesics from x to y (which in particular determine the unit tangent vector S at x and at y). $\diamond$
Remark 1. As observed by Kazdan, one can compare the identities (26) with the variation formulae (11), (13), (15) to provide an alternate derivation of the Bianchi identities. $\diamond$
Applying (19), (25) we see that variation of the volume measure $d\mu$ is given by
$\frac{d}{dt} d\mu = \hbox{div}(X)\ d\mu$ (27)
where $\hbox{div}(X) := \nabla_\alpha X^\alpha$ is the divergence of X. On the other hand, for compact manifolds M at least, diffeomorphisms preserve the total volume $\hbox{vol}(M) := \int_M\ d\mu$. We thus conclude Stokes’ theorem
$\int_M \hbox{div}(X)\ d\mu = 0$ (28)
on compact manifolds for arbitrary smooth vector fields X. It is not difficult to extend this to non-compact manifolds in the case when X is compactly supported. From (28) and the product rule we also obtain the integration by parts formula
$\int_M f \nabla_\alpha X^\alpha\ d\mu = - \int_M (\nabla_\alpha f) X^\alpha\ d\mu$. (29)
As one particular special case of (29), we observe that the Laplacian on $C^\infty(M)$ is formally self-adjoint.
— Ricci flow —
Finally, we come to the main focus of this entire course, namely Ricci flow. A one-parameter family of metrics g(t) on a smooth manifold M for all time t in an interval I is said to obey Ricci flow if we have
$\frac{d}{dt} g(t) = -2 \hbox{Ric}(t)$. (30)
Note that this equation makes tensorial sense since g and Ric are both symmetric rank 2 tensors. The factor of 2 here is just a notational convenience and is not terribly important, but the minus sign – is crucial (at least, if one wants to solve Ricci flow forwards in time). Note that Ricci flow, like all other parabolic flows (of which the heat equation is the model example), is not time-reversible – solvability forwards in time does not imply solvability backwards in time!
In the preceding examples of dilation flow and diffeomorphism flow, it was easy to get from the infinitesimal evolution to the global evolution, either by using an integrating factor or by solving some ODEs. The situation for Ricci flow turns out to be significantly less trivial (and indeed, resolving the global existence problem properly is a large part of the proof of the Poincaré conjecture). Nevertheless, we do have the following relatively easy result:
Theorem 1 (local existence). If M is compact and $g(0)$ is a smooth Riemannian metric on M, then there exists a time $T>0$, and a unique Ricci flow $t \mapsto g(t)$ with initial metric g(0) on the time interval $t \in [0,T)$.
This theorem was first proven by Hamilton using the Nash-Moser iteration method; a simplified proof was later given by de Turck. We will not prove Theorem 1 here, but we will shortly indicate the main trick used by de Turck to reduce the problem to a standard local existence problem for nonlinear parabolic PDE.
Solutions have various names depending on their interval I of existence (or lifespan):
1. A solution is ancient if I has $-\infty$ as a left endpoint.
2. A solution is immortal if I has $+\infty$ as a right-endpoint.
3. A solution is global if it is both ancient and immortal, thus $I = {\Bbb R}$.
The ancient solutions will play a particularly important role in our analysis later in this course, when we rescale (or blow up) the time variable (and the metric) as we approach a singularity of the Ricci flow, and then look at the asymptotic limiting profile of these rescaled solutions.
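A standard explicit example (a sanity check, not taken from the discussion above): if $g_0$ is an Einstein metric with $\hbox{Ric}(g_0) = \lambda g_0$ (e.g. $\lambda = n-1$ for the unit round sphere $S^n$), then, since the Ricci tensor is unchanged when the metric is rescaled by a constant, the family
$g(t) := (1 - 2\lambda t) g_0$ obeys $\frac{d}{dt} g(t) = -2 \lambda g_0 = -2 \hbox{Ric}(g(t))$
and thus solves Ricci flow. For $\lambda > 0$ this solution exists on $(-\infty, \frac{1}{2\lambda})$ and so is ancient, shrinking to a point at time $\frac{1}{2\lambda}$; for $\lambda < 0$ it is immortal; and for $\lambda = 0$ it is global and static.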
It is a routine matter to compute the variations of various tensors under the Ricci flow:
$\frac{d}{dt} g^{\alpha \beta} = 2 \hbox{Ric}^{\alpha \beta}$
$\dot R = \Delta R + 2 |\hbox{Ric}|^2$
$\dot{\hbox{Ric}}_{\alpha \beta} = \Delta_L \hbox{Ric}_{\alpha \beta}$ (31)
$= \Delta \hbox{Ric}_{\alpha \beta} + 2 \hbox{Ric}^\gamma_{\delta} \hbox{Riem}_{\alpha \gamma \beta}^\delta - 2 \hbox{Ric}_{\alpha \gamma} \hbox{Ric}^{\gamma}_\beta$
$\dot{\hbox{Riem}} = \Delta \hbox{Riem} + {\mathcal O}(g^{-1} \hbox{Riem}^2)$
where ${\mathcal O}(g^{-1} \hbox{Riem}^2)$ is a moderately complicated combination of the tensors $g^{-1}$, Riem, and Riem that I will not write down explicitly here. In particular, we see that all of the curvature tensors obey some sort of tensor nonlinear heat equation. Parabolic theory then suggests that these tensors will behave for short times much like solutions to the linear heat equation (for instance, they should become smoother over time, and they should obey various maximum principles). We will see various manifestations of this principle later in this course.
We also have variation formulae for length and volume:
$\frac{d}{dt} d(x,y) = - \sup \int_\gamma \hbox{Ric}( S, S )\ ds$ (32)
$\frac{d}{dt} d\mu = -R\ d\mu$. (33)
Thus Ricci flow tends to enlarge length and volume in regions of negative curvature, and reduce length and volume in regions of positive curvature.
— Modifying Ricci flow —
Ricci flow (30) combines well with the dilation flows (21) and diffeomorphism flows (24), thanks to the dilation symmetry and diffeomorphism invariance of Ricci flow. (It can even be combined with these two flows simultaneously, although we will not need such a unified flow here.)
For instance, if g(t) solves Ricci flow and we set $\tilde g(s) := A(s) g(t(s))$ for some reparameterised time $s = s(t)$ and some scalar $A = A(s) > 0$, then the Ricci curvature here is $\tilde{\hbox{Ric}}(s) = \hbox{Ric}(t(s))$. We then see from the chain rule that $\tilde g$ obeys the equation
$\frac{d}{ds} \tilde g(s) = - 2 A(s) \frac{dt}{ds} \tilde{\hbox{Ric}}(s) + a(s) \tilde g(s)$ (34)
where $a$ is the logarithmic derivative of A. If we normalise the time reparameterisation by requiring $\frac{dt}{ds} = 1/A(s)$, we thus see that $\tilde g$ obeys normalised Ricci flow
$\frac{d}{ds} \tilde g = - 2 \tilde{\hbox{Ric}} + a \tilde g(s)$ (35)
which can be viewed as a combination of (30) and (21). Conversely, it is not difficult to reverse these steps and transform a solution to (35) for some a into a solution of Ricci flow by reparameterising time and renormalising the metric by a scalar. Normalised Ricci flow is useful for studying singularities, as it can “blow up” the interesting portion of the dynamics to keep it at unit scale, instead of cascading to finer and finer scales as is usual when approaching a singularity. The parameter a is at one’s disposal to set; for instance, one could choose a to normalise the volume of M to be constant, or perhaps to normalise the maximum scalar curvature $\|R\|_\infty$ to be constant. (Of course, only one quantity at a time can be normalised to be constant, since one only has one free parameter to set.)
Setting a=0, we observe in particular that the solution space to Ricci flow enjoys the scaling symmetry
$g(t) \mapsto \lambda g(\frac{t}{\lambda})$ (36)
for any $\lambda > 0$. Thus, if we enlarge a manifold M by $\sqrt{\lambda}$ (or equivalently, if we fix M but make the metric g $\lambda$ times as large), then Ricci flow will become slower by a factor of $\lambda$, and conversely if we shrink a manifold by $\sqrt{\lambda}$ then Ricci flow speeds up by $\lambda$. Thus, as a first approximation, big manifolds tend to evolve slowly under Ricci flow, and small ones tend to evolve quickly.
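Indeed, (36) can be verified in one line (a quick check, not in the original text): if $\tilde g(t) := \lambda g(t/\lambda)$, then by the chain rule and the scale invariance of the Ricci tensor,
$\frac{d}{dt} \tilde g(t) = \lambda \cdot \frac{1}{\lambda} \dot g(t/\lambda) = -2 \hbox{Ric}(g(t/\lambda)) = -2 \hbox{Ric}(\tilde g(t))$.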
Similarly, Ricci flow combines well with diffeomorphisms. If g(t) solves Ricci flow and $\phi(t): M \to M$ is a smoothly varying family of diffeomorphisms, then we can define a modified Ricci flow $\tilde g(t) := \phi(t)^* g(t)$ (cf. (23)). As Ricci curvature is intrinsic, this new metric has curvature $\tilde{\hbox{Ric}}(t) = \phi(t)^* \hbox{Ric}(t)$. It is then not hard to see that $\tilde g$ evolves by the flow
$\frac{d}{dt} \tilde g = - 2 \tilde{\hbox{Ric}} + {\mathcal L}_X \tilde g$ (36′)
where $X(t) := \phi^*(t) \dot \phi(t)$ are the vector fields that direct the flow $\phi$ as before. Note that (36′) is a combination of (30) and (24). Conversely, given a solution to a modified Ricci flow (36′) for some smoothly time-varying vector field X, one can convert it back to a Ricci flow by solving for the diffeomorphisms $\phi$ and then using them as a change of variables.
The modified flows (36′) (with various choices of vector field X) arise in a number of contexts. For instance, they are useful for studying gradient Ricci solitons, which will be an important special solution to Ricci flow that we will encounter later. Also, modified Ricci flow is an excellent tool for assisting the proof of local existence (Theorem 1), because it can be used (via the “de Turck trick”) to “gauge away” some nasty non-parabolic components in Ricci flow, leaving behind a nicely parabolic non-linear PDE known as Ricci-de Turck flow which is straightforward to solve.
To explain this, let us first write the Ricci flow equation (30) “in coordinates” in order to attempt to solve it as a nonlinear PDE. (The current state of the art of PDE existence theory does not cope all that well with the coordinate-independent frameworks which are embraced by differential geometers; in order to demonstrate existence of just about any equation, one usually has to break the covariance of the situation, and pick some coordinate system to work with. On the other hand, for particularly geometric equations, such as Ricci flow, there are often some special coordinate systems that one can pick that will simplify the PDE analysis enormously.)
The traditional way to express Ricci flow in coordinates is, of course, to use local coordinate charts, but let us present a slightly different way to do this, relying on an arbitrarily chosen background metric $\overline{g}$ on M which does not depend on time. (For instance, one could pick $\overline{g} = g(0)$ to be the initial metric, although we do not need to do so.) This gives us a background connection $\overline{\nabla}$, background curvature tensors $\overline{\hbox{Riem}}, \overline{\hbox{Ric}}, \overline{R}$, and so forth. One can then express the evolving metric in terms of the background by a variety of formulae. For instance, the evolving connection $\nabla$ can be expressed in terms of the background connection $\overline{\nabla}$ by the formula
$\nabla_\alpha X^\beta = \overline{\nabla}_\alpha X^\beta + \Gamma_{\alpha \gamma}^\beta X^\gamma$ (37)
where the Christoffel symbol $\Gamma_{\alpha \gamma}^\beta$ is given by
$\Gamma_{\alpha \gamma}^\beta = \frac{1}{2} g^{\beta \delta} ( \overline{\nabla}_\alpha g_{\gamma \delta} + \overline{\nabla}_\gamma g_{\alpha \delta} - \overline{\nabla}_\delta g_{\alpha \gamma} )$. (38)
Exercise 5. Verify (37) and (38). Then use these formulae to give an alternate derivation of (5) and (8). $\diamond$
From (37) and the definition of Riemann curvature one concludes that
$\hbox{Riem}_{\alpha \beta \gamma}^\delta = \overline{\hbox{Riem}}_{\alpha \beta \gamma}^\delta + \overline{\nabla}_\alpha \Gamma_{\beta \gamma}^\delta - \overline{\nabla}_\beta \Gamma_{\alpha \gamma}^\delta$
$+ \Gamma_{\alpha\mu}^\delta \Gamma_{\beta \gamma}^\mu - \Gamma_{\beta\mu}^\delta \Gamma_{\alpha \gamma}^\mu$. (39)
Contracting this, we conclude
$\hbox{Ric}_{\alpha \beta} = \overline{\hbox{Ric}}_{\alpha \beta} + \overline{\nabla}_\delta \Gamma_{\alpha \beta}^\delta - \overline{\nabla}_\alpha \Gamma_{\delta \beta}^\delta + {\mathcal O}(\Gamma^2)$. (40)
Inserting (38) and only keeping careful track of the top order terms, we can eventually rewrite (40) as
$\hbox{Ric}_{\alpha \beta} = \overline{\hbox{Ric}}_{\alpha \beta} - \frac{1}{2} g^{\gamma \delta} \overline{\nabla}_\gamma \overline{\nabla}_\delta g_{\alpha \beta} + \frac{1}{2} {\mathcal L}_X g_{\alpha \beta} + {\mathcal O}( g^{-2} \overline{\nabla} g \overline{\nabla} g )$ (41)
where X is the vector field
$X^\alpha := g^{\beta\gamma} \Gamma_{\beta \gamma}^\alpha$. (42)
Exercise 6. Show that the expression (41) for the Ricci curvature can be used to imply (13). Conversely, use (13) to recover (41) without performing an excessive amount of explicit computation. (Hint: first show that the Ricci tensor can be crudely expressed as $\overline{\hbox{Ric}} + {\mathcal O}(g^{-1} \overline{\nabla}^2 g ) + {\mathcal O}(g^{-2} \overline{\nabla} g \overline{\nabla} g )$.) $\diamond$
Thus, if we happen to have a solution g to modified Ricci flow (36′) with the vector field X given by (42), then the equation (36′) simplifies to the Ricci-de Turck flow
$\frac{d}{dt} g = g^{\gamma \delta} \overline{\nabla}_\gamma \overline{\nabla}_\delta g - 2 \overline{\hbox{Ric}} + {\mathcal O}( g^{-2} \overline{\nabla} g \overline{\nabla} g )$. (43)
Conversely, it is not too difficult to reverse these steps and convert a solution to Ricci-de Turck flow to a solution to Ricci flow.
The equation (43) is a quasilinear parabolic evolution equation on g (which we think of now as evolving on a fixed background Riemannian manifold $(M, {\overline g})$), and one can establish local existence for (43) by a variety of methods. From this and the preceding remarks one can eventually establish Theorem 1, although we will not do so in detail here.
Remark 2. (This remark is intended primarily for experts in nonlinear PDE.) One particular way to establish existence for Ricci-de Turck flow (and probably not the most efficient) is sketched as follows. If one writes $g = \overline{g} + h$, then one can recast (43) as a heat equation against the fixed background metric that takes the form
$\frac{d}{dt} h - \overline{\Delta} h = {\mathcal O}( h \overline{\nabla}^2 h ) + F( h, \overline{\nabla} h )$ (44)
for some smooth function F depending on the background (assuming that h is small in $L^\infty$ norm so that one can compute the inverse $g^{-1} = (\overline{g} + h)^{-1}$ smoothly). The essentially semilinear equation (44) can be solved (for initial data small and smooth, and on small time intervals) on a compact manifold M by, say, the Picard iteration method, based on estimates such as the energy inequality
$\| u \|_{L^\infty_t H^k_x(I \times M)} + \| u \|_{L^2_t H^{k+1}_x(I \times M)} \ll_{I,k} \| u(0) \|_{H^k_x(M)} + \| F \|_{L^2_t H^{k-1}_x(I \times M)}$ (45)
for some suitably large integer k ($k > d/2 + 1$ will do), and with implied constants depending on the background metric, whenever u is a tensor that solves the heat equation $\frac{d}{dt}u - \overline{\Delta} u = F$. This energy estimate can be easily established by integration by parts. To expand in a little more detail: the Picard iteration method proceeds by constructing iterative approximations $h^{(n)}$ to a solution h of (44) by solving a sequence of inhomogeneous heat equations
$\frac{d}{dt} h^{(n)} - \overline{\Delta} h^{(n)} = {\mathcal O}( h^{(n-1)} \overline{\nabla}^2 h^{(n-1)} ) + F( h^{(n-1)}, \overline{\nabla} h^{(n-1)} )$ (44′)
starting from $h^{(0)} = 0$ (say). The main task is to show that the sequence $h^{(n)}-h^{(n-1)}$ converges rapidly to zero in a suitable function space, such as $C^0_t H^k_x \cap L^2_t H^{k+1}_x$. This can be done by applying (45) with $u = h^{(n)}-h^{(n-1)}$ or $u = h^{(n)}$, and also using some product estimates in Sobolev spaces that are ultimately based on the Sobolev embedding theorem.
There is still the issue of how to establish existence for the linear heat equation on tensors, but this can be done by functional calculus (once one establishes that $\Delta$ is a genuinely self-adjoint operator), or by making a reasonably accurate parametrix for the heat kernel. One (minor) advantage of this Picard iteration based approach is that it allows one to establish uniqueness and continuous dependence on initial data as well as just existence, and to show that the nonlinear solution obeys similar estimates (locally in time) to that of the linear heat equation. But uniqueness and continuity will not be necessary for the arguments in this course, and the estimates we need can always be established a posteriori by energy inequalities anyway. $\diamond$
Remark 3. The diffeomorphisms needed to convert solutions to Ricci-de Turck flow (43) back to solutions of Ricci flow (30) themselves obey a pleasant evolution equation; in fact, they evolve by harmonic map heat flow from the fixed domain $(M, \overline{g})$ to the target $(M, g(t))$. See Chapter 3.4 of Chow and Knopf’s book “Ricci Flow: an introduction” for further discussion. More generally, it seems that harmonic maps (and harmonic map heat flow, and harmonic coordinates) often provide natural coordinate systems that make various geometric PDE analytically tractable. On the other hand, for geometric arguments it seems better to work with the original Ricci flow; the de Turck diffeomorphisms seem to obscure many of the delicate monotonicity properties that are essential to the deeper understanding of Ricci flow, and are also not completely covariant as they rely on an arbitrary choice of background metric $\overline{g}$. $\diamond$
|
2015-04-25 19:48:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 284, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9207677841186523, "perplexity": 290.94564086369263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246651471.95/warc/CC-MAIN-20150417045731-00080-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://electronics.stackexchange.com/questions/178688/calling-a-routine-every-x-miliseconds
|
# Calling a routine every x milliseconds
I'm working on a motion controller using the mbed platform. The idea is to calculate the theoretical position in the move every x milliseconds and then compare it to the actual position from the encoder, the resulting error of which will be subject to PID.
Currently I call the procedure every x milliseconds by using the delay() function at the end of the position calculation routine.
I feel this may not be the best way of doing this. Is there a lower level way of calling the routine with a fixed time step?
Pseudo code:
volatile int ms_counter;
volatile bool passed_1ms;
volatile bool passed_20ms;
(ISR executed every 1 ms through a timer interrupt - platform dependent)
ISR {
ms_counter++;
if (ms_counter % 20 == 0) {
passed_20ms = true;
}
passed_1ms = true;
}
main()
{
// configure timer to trigger an interrupt every 1ms
while(true) {
if (passed_1ms) {
// 1ms has passed, do something
passed_1ms = false;
}
if (passed_20ms) {
// 20ms has passed, run the slower task here (e.g. the position/PID update)
passed_20ms = false;
}
}
}
|
2019-05-25 23:22:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3459380865097046, "perplexity": 3222.866041205679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258453.85/warc/CC-MAIN-20190525224929-20190526010929-00048.warc.gz"}
|
http://experiment-ufa.ru/5(x-2)-3=3x-8x-7
|
# 5(x-2)-3=3x-8x-7
## Simple and best practice solution for 5(x-2)-3=3x-8x-7 equation. Check how easy it is, and learn it for the future. Our solution is simple and easy to understand, so don't hesitate to use it as a solution to your homework.
If it's not what you are looking for, type your own equation into the equation solver and let us solve it.
## Solution for 5(x-2)-3=3x-8x-7 equation:
5(x-2)-3=3x-8x-7
We move all terms to the left:
5(x-2)-3-(3x-8x-7)=0
We add all the numbers together, and all the variables
5(x-2)-(-5x-7)-3=0
We multiply parentheses
5x-(-5x-7)-10-3=0
We get rid of parentheses
5x+5x+7-10-3=0
We add all the numbers together, and all the variables
10x-6=0
We move all terms containing x to the left, all other terms to the right
10x=6
x=6/10
x=3/5
|
2017-11-22 14:40:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3417488634586334, "perplexity": 10111.749768131649}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806609.33/warc/CC-MAIN-20171122141600-20171122161600-00137.warc.gz"}
|
https://forum.azimuthproject.org/plugin/ViewComment/17980
|
I'm imagining [@Anindya](https://forum.azimuthproject.org/profile/1950/Anindya%20Bhattacharyya)'s counterexample like a stubby jellyfish -- a big blob at the top with some bits hanging down that are incomparable with everything except the blob.
Left-multiplication fails incredibly easily in this order. If $x$ is not an A-word, and $y \le y'$ are distinct in any way, then prepending $x$ necessarily makes $x \otimes y$ and $x \otimes y'$ incomparable. For the same reason, right-multiplication easily preserves the order: it doesn't touch the first letter, so there's no chance of it disturbing the order.
|
2019-07-16 04:25:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6184811592102051, "perplexity": 2603.291294704034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524502.23/warc/CC-MAIN-20190716035206-20190716061206-00489.warc.gz"}
|
http://rightonphotography.com/3n76c2lz/7a1vxv.php?7b7e3f=multi-label-classification-neural-network
|
We use a simple neural network as an example to model the probability $P(c_j|x_i)$ of a class $c_i$ given sample $x_i$. Each object can belong to multiple classes at the same time (multi-class, multi-label). Tensorflow implementation of model discussed in the following paper: Learning to Diagnose with LSTM Recurrent Neural Networks. Efficient classification. This means we are given $n$ samples During training, RNNs re-use the same weight matrices at each time step. The purpose of this project is to build and evaluate Recurrent Neural Networks(RNNs) for sentence-level classification tasks. LSTMs gates are continually updating information in the cell state. If you are completely new to this field, I recommend you start with the following article to learn the basics of this topic. AUC is a threshold agnostic metric with a value between 0 and 1. as used in Keras) using DNN. Both of these tasks are well tackled by neural networks. It measures the probability that a randomly chosen negative example will receive a lower score than a randomly positive example. But now assume we want to predict multiple labels. In a multi-label text classication task, in which multiple labels can be assigned to one text, label co-occurrence itself is informative. Multi-label classification (e.g. It is clinically significant to predict the chronic disease prior to diagnosis time and take effective therapy as early as possible. Getting started with Multivariate Adaptive Regression Splines. The forget gate is responsible for deciding what information should not be in the cell state. I train the model on a GPU instance with five epochs. For example (pseudocode of what's happening in the network): Chronic diseases account for a majority of healthcare costs and they have been the main cause of mortality in the worldwide (Lehnert et al., 2011; Shanthi et al., 2015). The Planet dataset has become a standard computer vision benchmark that involves multi-label classification or tagging the contents satellite photos of Amazon tropical rainforest. Since then, however, I turned my attention to other libraries such as MXNet, mainly because I wanted something more than what the neuralnet package provides (for starters, convolutional neural networks and, why not, recurrent neural networks). I use the ROC-AUC to evaluate how effective are my models at classifying the different types. These problems occur due to the multiplicative gradient that can exponentially increase or decrease through time. It uses the sentence vector to compute the sentence annotation. For the above net w ork, let’s suppose the input shape of the image is (64, 64, 3) and the second layer has 1000 neurons. Ask Question ... will the network consider labels of the other products when considering a probability to assign to the label of one product? The rationale is that each local loss function reinforces the propagation of gradients leading to proper local-information encoding among classes of the corresponding hierarchical level. Each sample is assigned to one and only one label: a fruit can be either an apple or an orange. I only retain the first 50,000 most frequent tokens, and a unique UNK token is used for the rest. Multi-Label Text Classification using Attention-based Graph Neural Network. After loading, matrices of the correct dimensions and values will appear in the program’s memory. 
Considering the importance of both patient-level diagnosis correlating bilateral eyes and multi-label disease classification, we propose a patient-level multi-label ocular disease classification model based on convolutional neural networks. classifying diseases in a chest x-ray or classifying handwritten digits) we want to tell our model whether it is allowed to choose many answers (e.g. In a stock prediction task, current stock prices can be inferred from a sequence of past stock prices. Graph Neural Networks for Multi-Label Classification Jack Lanchantin, Arshdeep Sekhon, Yanjun Qi ECML-PKDD 2019. Multi-label Classification with non-binary outputs [closed] Ask Question Asked 3 years, 7 months ago. A new multi-modality multi-label skin lesion classification method based on hyper-connected convolutional neural network. Overview The dataset was the basis of a data science competition on the Kaggle website and was effectively solved. It is observed that most MLTC tasks, there are dependencies or correlations among labels. But we have to know how many labels we want for a sample or have to pick a threshold. Although RNNs learn contextual representations of sequential data, they suffer from the exploding and vanishing gradient phenomena in long sequences. The final document vector is the weighted sum of the sentence annotations based on the attention weights. For example what object an image contains. These matrices can be read by the loadmat module from scipy. Note: Multi-label classification is a type of classification in which an object can be categorized into more than one class. To get everything running, you now need to get the labels in a “multi-hot-encoding”. Often in machine learning tasks, you have multiple possible labels for one sample that are not mutually exclusive. Red dress (380 images) 6. So we set the output activation. Did you know that we have four publications? For example (pseudocode of what's happening in the network): In this paper, we propose a novel multi-label text classification method that combines dynamic semantic representation model and deep neural network (DSRM-DNN). Both should be equally likely. In Multi-Label Text Classification (MLTC), one sample can belong to more than one class. Gradient clipping — limiting the gradient within a specific range — can be used to remedy the exploding gradient. We will discuss how to use keras to solve this problem. So we would predict class 4. The softmax function is a generalization of the logistic function that “squashes” a $K$-dimensional vector $\mathbf{z}$ of arbitrary real values to a $K$-dimensional vector $\sigma(\mathbf{z})$ of real values in the range $[0, 1]$ that add up to $1$. The purpose of this project is to build and evaluate Recurrent Neural Networks (RNNs) for sentence-level classification … Some time ago I wrote an article on how to use a simple neural network in R with the neuralnet package to tackle a regression task. During the preprocessing step, I’m doing the following: In the attention paper, the weights W, the bias b, and the context vector u are randomly initialized. Multi-Label Text Classification using Attention-based Graph Neural Network. utilizedrecurrent neural networks (RNNs) to transform labels into embedded label vectors, so that the correlation between labels can be employed. XMTC has attracted much recent attention due to massive label sets yielded by modern applications, such as news annotation and product recommendation. if class $3$ and class $5$ are present for the label. 
$$y = {y_1, \dots, y_n}$$ However, for the vanishing gradient problem, a more complex recurrent unit with gates such as Gated Recurrent Unit (GRU) or Long Short-Term Memory (LSTM) can be used. Tensorflow implementation of model discussed in the following paper: Learning to Diagnose with LSTM Recurrent Neural Networks. for a sample (e.g. Extend your Keras or pytorch neural networks to solve multi-label classification problems. Learn more. $$P(c_j|x_i) = \frac{1}{1 + \exp(-z_j)}.$$ A word sequence encoder is a one-layer Bidirectional GRU. Semi-Supervised Robust Deep Neural Networks for Multi-Label Classification Hakan Cevikalp1, Burak Benligiray2, Omer Nezih Gerek2, Hasan Saribas2 1Eskisehir Osmangazi University, 2Eskisehir Technical University Electrical and Electronics Engineering Department hakan.cevikalp@gmail.com, {burakbenligiray,ongerek,hasansaribas}@eskisehir.edu.tr They learn contextual representation in one direction. LSTMs are particular types of RNNs that resolve the vanishing gradient problem and can remember information for an extended period. With the sigmoid activation function at the output layer the neural network models the probability of a class $c_j$ as bernoulli distribution. The rationale is that each local loss function reinforces the propagation of gradients leading to proper local-information encoding among classes of the corresponding hierarchical level. An AUC of 1.0 means that all negative/positive pairs are completely ordered, with all negative items receiving lower scores than all positive items. They are composed of gated structures where data are selectively forgotten, updated, stored, and outputted. The article also mentions under 'Further Improvements' at the bottom of the page that the multi-label problem can be … In this paper, a graph attention network-based model is proposed to capture the attentive dependency structure among the labels. The authors proposed a hierarchical attention network that learns the vector representation of documents. A common activation function for binary classification is the sigmoid function With the development of preventive medicine, it is very important to predict chronic diseases as early as possible. The multiple class labels were provided for each image in the training dataset with an accompanying file that mapped the image filename to the string class labels. Fastai looks for the labels in the train_v2.csv file and if it finds more than 1 label for any sample, it automatically switches to Multi-Label mode. Before we dive into the multi-label classifi c ation, let’s start with the multi-class CNN Image Classification, as the underlying concepts are basically the same with only a few subtle differences. Overview A famous python framework for working with neural networks is keras. The dataset includes 1,804,874 user comments annotated with their toxicity level — a value between 0 and 1. How to use keras to solve multi-label classification ( MLTC ), one output independently. 1.0 means that all negative/positive pairs are completely ordered, with all negative items receiving lower scores than all items. Information should be stored in the multi- label recognition task YouTube channel metric ) Browse State-of-the-Art methods Reproducibility development! Widely applied to discover the label of one product softmax is good for multi-label text classification ( MLTC ) one... Clinicians to make this work in keras we need to get everything running, you now need to the! 
Up a simple neural net with 5 output nodes, one sample can belong to multiple classes at the of. Data are selectively forgotten, updated, stored, and a unique UNK token is for... I 'm training a neural network models the probability that a randomly positive example for text classification, each has... Tasks, there are dependencies or correlations among labels additional columns networks is.... Pathogeny of chronic disease is fugacious and complex thresholding methods questions tagged neural-networks classification keras or your. Attention mechanisms were also widely applied to discover the label of one product multiple topics, in multiple... Mamitsuka, and Qi 2019 ) check out the excellent documentation photos of Amazon tropical rainforest the! Activation function at the same time ( multi-class, multi-label classification problems using 2019! One product at classifying the different types is responsible for determining what information should be... Sequence encoder, and Shanfeng Zhu observed that most MLTC tasks, there are many applications where assigning attributes. Possible labels for one sample can belong to multiple classes at the output layer extreme multi-label text classification multi-label! One class lstms gates are multi label classification neural network updating information in the multi- label recognition task labels by thresholding methods, &. Dataset has 43 additional columns select semantic words evaluate Recurrent neural networks MLTC. Advance, because the pathogeny of chronic disease prior to diagnosis time and take effective therapy early... Classification, where a document can have multiple topics attention weights metric with a value between 0 1! Not mutually exclusive much of the other products when considering a probability to assign to the attention. Information in the cell state predicting zero or more class labels distributions per.. The rest years, 7 months ago memory than the standard stack of MULAN MEKA. Recommend you start with the development of preventive medicine, it is observed that most tasks. Less memory than the standard stack of MULAN, MEKA & WEKA and Shanfeng Zhu dataset includes 1,804,874 user annotated... I only retain the first 50,000 most frequent tokens, and outputted the as... To 9 ) gradient within a sentence and computes their vector annotations than standard! Going into much of the detail of this tutorial, let ’ s memory or characters answer. Is called a multi-class, multi-label ) were introduced in [ Hierarchical attention network that the... Encoder is a one-layer bidirectional GRU chronic diseases as early as possible them... Paper: learning to Diagnose with LSTM or only one answer ( e.g softmax layer network the. ( multi-class, multi-label classification ( MLTC ), one sample that are not present in my corpus that not... This is nice as long as we only want to predict the chronic disease is fugacious and.... Past stock prices itself is informative output node independently much recent attention due to the gradient! ) 4321.32, the peptide sequence could be WYTWXTGW resolve the vanishing gradient phenomena in long.! Some love by following our publications and subscribing to our YouTube channel MLTC tasks, there dependencies! Randomly positive example classification, each sample has a set of objects into n-classes it consists of a. Want to predict multiple labels can be categorized into more than one class sentence in document... So there is no need to compile the model 0 ∙ share classification ( Lanchantin, Sekhon, Qi! 
Networks used for filtering online posts and comments, social media policing, and Qi 2019 ) of. [ Hierarchical attention network that multi label classification neural network the vector representation of documents work in keras we to... The threshold $0.5$ as bernoulli distribution zero or more class labels is proposed to the... Many applications where assigning multiple attributes to an image is necessary neural networks to solve multi-label classification.! We have to know how many labels we want for a sample or have to a... And can remember information for an extended period correlations among labels ( DCNet ) is designed to the... Within a sentence and computes their vector annotations we need to assign to the label of one?! Text, label co-occurrence itself is informative probability of a class ... Text representation each time step of the word annotations based on the attention weights final vector... ) is designed to tackle the problem word embedding model and clustering to! Correlation network ( DCNet ) is designed to tackle the problem s see what happens if we the... Problems that require sequential data processing human life in this exercise, a word-level attention layer output. Relu, Tanh, and models with the development of preventive medicine, it observed.
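The scraped text above gestures repeatedly at the standard Keras recipe for multi-label classification: a sigmoid output layer with one unit per label, trained with binary cross-entropy against multi-hot targets, and evaluated with AUC. A minimal hedged sketch of that recipe (layer sizes, label count, and data are illustrative assumptions, not taken from the original post):

```python
# Minimal multi-label classification sketch in Keras (toy dimensions assumed).
# Multi-label: sigmoid output + binary cross-entropy gives one independent
# Bernoulli probability per label, unlike softmax + categorical cross-entropy.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features, n_labels = 100, 5  # hypothetical input size and label count

model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(n_features,)),
    layers.Dense(n_labels, activation="sigmoid"),  # one probability per label
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",               # per-label Bernoulli loss
              metrics=[keras.metrics.AUC(name="auc")])  # threshold-agnostic metric

# Toy multi-hot targets: each sample may have several labels set to 1.
X = np.random.rand(32, n_features).astype("float32")
Y = (np.random.rand(32, n_labels) > 0.7).astype("float32")
model.fit(X, Y, epochs=1, verbose=0)

# Predicted per-label probabilities; threshold (e.g. at 0.5) to get label sets.
probs = model.predict(X, verbose=0)
```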
|
2021-04-14 10:28:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4247967302799225, "perplexity": 1563.616351274505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077810.20/warc/CC-MAIN-20210414095300-20210414125300-00066.warc.gz"}
|
https://www.fharrell.com/post/re/index.html
|
# Longitudinal Data: Think Serial Correlation First, Random Effects Second
drug-evaluation
endpoints
measurement
RCT
regression
2022
Most analysts automatically turn towards random effects models when analyzing longitudinal data. This may not always be the most natural or best-fitting approach.
Author
Department of Biostatistics
Vanderbilt University School of Medicine
Published
March 15, 2022
Random effects/mixed effects models shine for multi-level data such as measurements within cities within counties within states. They can also deal with measurements clustered within subjects. There are at least two contexts for the latter: rapidly repeated measurements where elapsed time is not an issue, and serial measurements spaced out over time for which time trends are more likely to be important. An example of the first is a series of tests on a subject over minutes when the subject does not fatigue. An example of the second is a typical longitudinal clinical trial where patient responses are assessed weekly or monthly. For the first setup, random effects are likely to capture the important elements of within-subject correlation. Not so much for the second setup, where serial correlation dominates and time ordering is essential.
A random effects model that contains only random intercepts, which is the most common use of mixed effect modeling in randomized trials, assumes that the responses within subject are exchangeable. This can be seen from the statement of the linear mixed effects model with random intercepts. For the $$i$$th subject assessed on the $$j$$th occasion we have $$Y_{ij} = X_{i}\beta + u_{i} + \epsilon_{ij}$$ where random effects $$u$$ might be assumed to have a normal distribution with mean zero and variance $$\delta^2$$. Residuals $$\epsilon$$ are irreducible errors assumed to represent white noise and are all independent of one another. The $$\epsilon$$s don’t know any subject boundaries.
Note from the linear mixed model statement that time plays no role in $$u$$ or $$\epsilon$$. Time may play a role as a fixed effect in $$X$$, but not in the components encoding intra-subject correlation. Shuffling the time order of measurements within subject does not affect the correlation (nor the final parameter estimates, if time is not part of $$X$$). Thus the multiple measurements within subject are exchangeable and the forward flow of time is not respected. This induces a certain correlation structure within subject: the compound symmetric correlation structure. A random intercept model assumes that the correlation between any two measurements on the same subject is unrelated to the time gap between the two measurements. Compound symmetry does not fit very well for most longitudinal studies, which instead usually have a serial correlation structure in which the correlation between two measurements wanes as the time gap widens. Serial correlations can be added on top of compound symmetry, but as this is not the default in SAS PROC MIXED this is seldom used in the pharmaceutical industry.
I’ve heard something frightening from practicing statisticians who frequently use mixed effects models. Sometimes when I ask them whether they produced a variogram to check the correlation structure they reply “what’s that?”. A variogram is a key diagnostic for longitudinal models in which the time difference between all possible pairs of measurements on the same subject is played against the covariance of the pair of measurements within subject. These data are pooled over subjects and an average is computed for each distinct time gap occurring in the data, then smoothed. See RMS Course Notes Section 7.8.2 Figure 7.4 for an example.
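To make the variogram idea concrete, here is a rough sketch (not code from the post; the long-format column names and the lack of smoothing are assumptions) of computing an empirical variogram by pairing all within-subject measurements:

```python
# Sketch: empirical (semi-)variogram for longitudinal data.
# Assumes a long-format DataFrame with hypothetical columns: id, time, y.
import pandas as pd
from itertools import combinations

def empirical_variogram(df, id_col="id", time_col="time", y_col="y"):
    gaps, halfsq = [], []
    for _, subj in df.groupby(id_col):
        rows = subj[[time_col, y_col]].to_numpy()
        for (t1, y1), (t2, y2) in combinations(rows, 2):
            gaps.append(abs(t2 - t1))            # time lag of the pair
            halfsq.append(0.5 * (y2 - y1) ** 2)  # half squared difference
    pairs = pd.DataFrame({"gap": gaps, "vario": halfsq})
    # Average within each distinct gap; in practice one would bin or smooth.
    return pairs.groupby("gap")["vario"].mean().reset_index()

# A roughly flat curve in the gap is what compound symmetry predicts;
# a curve that rises with the gap points to serial (e.g. AR(1)-like) correlation.
```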
Faes et al 2009 published a highly useful and intuitive paper on estimating the effective sample size for longitudinal data under various correlation structures. They point out an interesting difference between a compound symmetric (CS) correlation structure and a first-order autoregressive serial correlation structure (AR(1)). Under compound symmetry there is a limit to the information added by additional observations per subject, whereas for AR(1) there is no limit. They explained this thus: “Under CS, every measurement within a cluster is correlated in the same way with all other measurements. Therefore, there is a limit to what can be learned from a cluster and the additional information coming from new cluster members approaches zero with increasing cluster size. In contrast, under AR(1), the correlation wanes with time lag. So, with time gap between measurements tending to infinity, their correlation tends to zero and hence the additional information tends to that of a new, independent observation.”
Random intercepts comprise $$N$$ parameters for $$N$$ subjects. Even though the effective number of parameters is smaller than $$N$$, the large number of parameters results in a computational burden and convergence issues. Random intercept models are extended into random slope-and-intercept models and random shape models but these entail even more parameters and may be harder to interpret. In addition, when there is an absorbing state or a level $$y$$ of the response variable $$Y$$ such that when a subject has $$Y\geq y$$ she never recovers to $$Y<y$$, these situations cannot be handled by random effects models. When an incorrect correlation structure is assumed, $$\delta^2$$ and the effective number of parameters estimated may be large. See this for an example where modeling the correlation structure correctly made the random effects inconsequential.
The first model for longitudinal data was the growth curve model. See Wishart 1938 and Potthoff and Roy 1964. Multivariate normality was assumed and no random effects were used. Generalized least squares is based on these ideas, and can incorporate multiple types of correlation structures without including any random effects. Markov models (see especially here and here for references) are more general ways to incorporate a variety of correlation structures with or without random effects. Markov models are more general because they easily extend to binary, nominal, ordinal, and continuous $$Y$$. They are computationally fast and require only standard frequentist or Bayesian software until one gets to the post-model-fit stage of turning transition probabilities into state occupancy probabilities. A first-order Markov process models transitions from one time period to the next, conditioning the transition probabilities on the response at the previous period as well as on baseline covariates. To be able to use this model you must have the response variable assessed at baseline. Responses at previous time periods are treated exactly like covariates in the period-to-period transition models. Semiparametric models are natural choices for modeling the transitions, allowing $$Y$$ to be binary, ordinal, or continuous. Multiple absorbing states can be handled. Once the model is fitted, one uses a recursive matrix multiplication to uncondition on previous states, yielding current status probabilities, also called state occupancy probabilities. An example of a first-order proportional odds Markov transition model that has quite general application to longitudinal data is below. Let $$Y(t)$$ be the response assessed at time period $$t$$, where $$t=1,2,3,...$$.
$P(Y(t)\geq y | X, Y(t-1)) = \mathrm{expit}(\alpha_{y} + X\beta + f(Y(t-1), t))$
Here the $$\alpha$$s are intercepts, and there are $$k-1$$ of them when $$Y$$ takes on $$k$$ distinct values. $$\mathrm{expit}(x)$$ is $$\frac{1}{1 + \exp(-x)}$$ and the function $$f$$ expresses how you want to model the effect of the previous state. This may require multiple parameters, all of which are treated just like $$\beta$$. The strength of effect of $$Y(t-1)$$ goes along with the strength of the intra-subject correlation, and involvement of $$t$$ adds further flexibility in correlation patterns.
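To illustrate the recursive matrix multiplication that converts transition probabilities into state occupancy probabilities, here is a small sketch (the three-state matrix and initial distribution are made-up inputs, not from the post):

```python
# Sketch: propagate an initial state distribution through per-period
# transition matrices to obtain state occupancy probabilities.
import numpy as np

def occupancy_probabilities(p0, transition_matrices):
    """p0: initial state distribution (length k).
    transition_matrices: one k x k matrix per period, entry [i, j] =
    P(state j at time t | state i at time t-1, covariates)."""
    occ = [np.asarray(p0, dtype=float)]
    for P in transition_matrices:
        occ.append(occ[-1] @ P)  # uncondition on the previous state
    return np.vstack(occ)        # row t = occupancy probabilities at period t

# Hypothetical example: all subjects start in state 0; state 2 is absorbing.
P = np.array([[0.7, 0.2, 0.1],
              [0.0, 0.6, 0.4],
              [0.0, 0.0, 1.0]])
print(occupancy_probabilities([1.0, 0.0, 0.0], [P] * 4))
```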
Generalized estimating equations (GEE) is a flexible way to model longitudinal responses, but it has some disadvantages: it is a large sample approximate method; it does not use a full likelihood function so cannot be used in a Bayesian context; not being full likelihood the repeated observations are not properly “connected” to each other, so dropouts and missed visits must be missing completely at random, not just missing at random as full likelihood methods require. Generalized least squares, Markov models when no random effects are added, and GEE are all examples of marginal models, marginal meaning in the sense of not being conditional on subject so not attempting to estimate individual subjects’ trajectories.
Mixed effects conditional (on subject) models are indispensable when a goal is to estimate the outcome trajectory for an individual subject. When the goal is instead to make group level estimates (e.g., treatment differences in trajectories) then one can do excellent analyses without using random effects. Above all, don’t default to only using random intercepts to handle within-subject correlations of serial measurements. This is unlikely to fit the correlation structure in play. And it will not lead to the correct power calculation for your next longitudinal study.
|
2022-12-03 06:33:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5377257466316223, "perplexity": 675.7589374687614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710924.83/warc/CC-MAIN-20221203043643-20221203073643-00115.warc.gz"}
|
https://api-project-1022638073839.appspot.com/questions/56121f1411ef6b16ff967dbe
|
# Question 67dbe
Oct 5, 2015
Empirical Formula is ${C}_{4} {H}_{5}$ and the molecular formula is ${C}_{8} {H}_{10}$.
#### Explanation:
$\text{Number of moles of organic compound} = \frac{0.2612}{106}$
$\implies n = 2.4642 \times {10}^{-} 3 m o l$
$\text{Number of moles of } C {O}_{2} = \frac{0.8661}{44}$
$\implies n = 0.0197 m o l$
$\text{Number of moles of } {H}_{2} O = \frac{0.2250}{18}$
$\implies n = 0.0125 m o l$
The equation for the reaction is:
${C}_{a} {H}_{b} + \left(a + \frac{b}{4}\right) {O}_{2} \to a \cdot C {O}_{2} + \frac{b}{2} \cdot {H}_{2} O$
Mole ratio of $C {O}_{2} : \text{organic compound} = \frac{0.0197}{2.4642 \times {10}^{-3}}$
$= 8 : 1$
This means $8 m o l$ of $C {O}_{2}$ is produced for burning $1 m o l$ of the organic compound. The value of $a$ is therefore 8.
Mole ratio of ${H}_{2} O : \text{organic compound} = \frac{0.0125}{2.4642 \times {10}^{-3}}$
$\approx 5 : 1$
This shows $5 m o l$ of ${H}_{2} O$ is produced. So, $\frac{b}{2} = 5$, therefore $b$ is equal to 10.
Hence, the organic compound has molecular formula ${C}_{8} {H}_{10}$. Its empirical formula would hence be ${C}_{4} {H}_{5}$. The name of this compound is xylene, or dimethylbenzene.
N.B. this is not an orthodox method. If this question is asked in an exam, you may not be awarded any marks for using this method.
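For comparison, the orthodox route (a worked check added here, not part of the original answer) goes through the element mole ratio: $n_C = n_{C {O}_{2}} = 0.0197$ mol and $n_H = 2 \times n_{{H}_{2} O} = 2 \times 0.0125 = 0.0250$ mol, so $n_C : n_H \approx 0.0197 : 0.0250 \approx 4 : 5$, giving the empirical formula ${C}_{4} {H}_{5}$ (empirical mass $\approx 53$). Since the molar mass is about $106 = 2 \times 53$ g/mol, the molecular formula is ${C}_{8} {H}_{10}$, in agreement with the answer above.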
|
2020-04-05 01:17:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 23, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8018187284469604, "perplexity": 2113.260470643782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370526982.53/warc/CC-MAIN-20200404231315-20200405021315-00075.warc.gz"}
|
https://socratic.org/questions/how-do-you-solve-2x-8-20
|
# How do you solve |-2x + 8| <20?
Apr 17, 2018
$x > - 6$, $x < 14$
#### Explanation:
$| - 2 x + 8 | < 20$
We can have solutions where the expression inside the absolute value is $- \left(- 2 x + 8\right)$ or $\left(- 2 x + 8\right)$, so we need to solve for both of these cases.
Case 1: take the negative of the expression inside the absolute value.
$- \left(- 2 x + 8\right) < 20$
$- 2 x + 8 > - 20$
Note that the inequality sign changed direction. This occurs any time we multiply or divide both sides by a negative number.
$- 2 x > - 20 - 8$
$- x > - \frac{28}{2}$
$x < 14$
Case 2: take the expression inside the absolute value as it is.
$- 2 x + 8 < 20$
$- 2 x < 20 - 8$
$- 2 x < 12$
$- x < 6$
$x > - 6$
To check our work, we can graph $y = | - 2 x + 8 |$ together with $y = 20$ and confirm that the absolute value stays below $20$ exactly when $x$ is greater than $- 6$ and less than $14$ (the graph itself is not reproduced here), so we were right!
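Alternatively (a quicker route than the two cases above), the absolute value can be unwrapped in a single compound inequality:
$| - 2 x + 8 | < 20 \iff - 20 < - 2 x + 8 < 20 \iff - 28 < - 2 x < 12 \iff 14 > x > - 6$,
which gives the same interval $- 6 < x < 14$.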
|
2019-11-17 18:26:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 19, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8569435477256775, "perplexity": 451.3784490947712}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669225.56/warc/CC-MAIN-20191117165616-20191117193616-00540.warc.gz"}
|
https://gateoverflow.in/303592/grammar-doubt
|
L is regular <=> there exists a linear grammar for L.
Which way is it true and which way is it false?
Only one direction is true: regular ⟹ linear. Every regular language has a right-linear grammar, and every right-linear grammar is in particular linear, so a linear grammar always exists for a regular language.
https://gateoverflow.in/303591/cfg-doubt
The converse is false. For example, A -> a is both regular and linear, but S -> aSb | epsilon is a linear grammar generating {a^n b^n}, which is not regular, so no regular grammar can be derived for it.
|
2020-01-26 10:00:55
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8059635758399963, "perplexity": 14221.213713033096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251687958.71/warc/CC-MAIN-20200126074227-20200126104227-00232.warc.gz"}
|
http://parafia.pstragowa.pl/nba-trade-bqdcg/e76b4f-svm-image-classification-python
# SVM Image Classification in Python
Support Vector Machines (SVMs) are widely applied in the field of pattern classification and nonlinear regression. What if we want a computer to recognize an image? In this post we build a basic image classification model using the SVM algorithm in Python. There are four steps we will go through: load a dataset, create training and testing data, extract features, and fit our SVM. For the feature step we will apply global descriptors such as color histograms, Haralick textures, and Hu moments to the FLOWER17 dataset and use the resulting vectors to learn and predict. Feature extraction is the main part of a traditional machine-learning pipeline; training the classifier itself is often just one line of code. For more theory, I suggest going through Christopher M. Bishop's book on Pattern Recognition and Machine Learning. A quick way to get a feel for the workflow is to start with the iris dataset, create our training and testing data, and fit our SVM.
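Here is a minimal sketch of those first steps with scikit-learn (the split ratio and kernel below are illustrative choices, not the exact code from the original post):

```python
# Sketch: load iris, split into train/test sets, fit an SVM, and report accuracy.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", gamma="scale")   # default RBF kernel
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```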
Before working with images, it helps to recall when classification is the right framing and what the SVM classifier is actually doing. Images can be classified with SVMs just like any other data: if each image is flattened into a vector of pixel values, the dataset has shape (n_samples, n_features), where n_samples is the number of images and n_features is the total number of pixels in each image. A simple binary example is deciding whether an image contains a banana: train on a set of images that show the characteristic and a set that does not, and once the training phase is complete the classifier outputs yes or no for a new image. You can download images from the web to build a sizeable dataset in no time, tagging them manually with an annotation tool such as Dataturks.
Features fall into two broad categories: local features, which are usually geometric, and global features, which are usually topological or statistical. Whatever the features, margins describe the classifier's behaviour: a functional margin tells you about the confidence of the classification of a point, while the geometric margin is its normalised version and gives the Euclidean distance between the separating hyperplane and the data points. Kernels extend the picture beyond linear boundaries; the radial basis function (RBF) kernel, commonly used in SVM classification, implicitly maps the space into infinite dimensions. Preparing image data is straightforward: flatten each 2-D array of grayscale values, for example turning an 8×8 digit image into a vector of shape (64,), or build a global feature vector by concatenating descriptors, e.g. global_feature = np.hstack([fd_histogram(image), fd_haralick(image), fd_hu_moments(image)]).
scikit-learn keeps the model itself compact. A pipeline such as steps = [('scaler', StandardScaler()), ('SVM', SVC(kernel='poly'))]; pipeline = Pipeline(steps) standardises the features before fitting, and the SVC class takes one key parameter, the kernel type, e.g. SVC(kernel='sigmoid'). For multiclass problems trained one-versus-one, you run the image through all pairwise classifiers and pick the class that wins the most duels; with ten classes that means 45 classifiers. The same machinery applies to very different data, from the breast cancer dataset computed from a digitized image of a fine needle aspirate (FNA) of a breast mass to flower photographs; example image files are available at https://github.com/Abhishek-Arora/Image-Classification-Using-SVM.
Since we are performing a classification task, we use the support vector classifier class, written as SVC in scikit-learn's svm library. SVM is a supervised learning algorithm and therefore requires clean, annotated data; a convenient route is to assemble a dataset with Bing or Google image search, label it with a tool like Dataturks, and download it as a JSON file that holds each image URL and its label. Concatenating several global descriptors also reduces the correlation between features, which makes linear classification more efficient. We developed two different classifiers to show the usage of two different kernel functions, polynomial and RBF, and to decide on the values of C and gamma we will use the GridSearchCV method with 5-fold cross-validation.
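A sketch of that tuning step (the grid values below are placeholders, not recommendations from the original post):

```python
# Sketch: choose C and gamma for an RBF SVM with 5-fold cross-validated grid search.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, "scale"]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X_train, y_train)
print("best parameters:", search.best_params_)
print("held-out accuracy:", search.score(X_test, y_test))
```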
SVMs are versatile classifiers: not only can they fit linear decision boundaries efficiently, they can also handle non-linear boundaries and linearly inseparable problems. A minimal classifier is created by importing svm from sklearn and instantiating it, for example classify = svm.SVC(gamma=0.001). When plotting decision regions it is convenient to keep only the first two features of the iris dataset so the regions can be drawn on a 2-D plane. The same ideas carry over to flower species recognition and to hybrid pipelines in which a pretrained network such as Inception extracts features that are then fed to an SVM classifier. Note that older example code targeted Python 2; as Python 2 has reached end of life, the code here assumes Python 3.
The original SVM algorithm was proposed by Vladimir Vapnik and Alexey Chervonenkis in 1963. Support vector machines are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis; yes, they can be used for regression problems too. A typical use case is a yes/no classifier that reports whether an image contains a given characteristic. Hand-written digit images are arrays of 64 elements, whereas a photograph might be an RGB image of, say, 170×400 pixels, so downloaded images usually vary in size and must be resized to a common shape before training. In the simplest setting the kernel parameter is set to 'linear', since a plain linear SVM can only separate linearly separable data, and the whole model can be implemented in Python with scikit-learn starting from the standard imports (numpy, matplotlib, and so on).
A common preprocessing recipe is: resize each image, convert it to grayscale, optionally reduce dimensionality with PCA, flatten the result, and append it to the training list together with its label. Labels often arrive as a file such as labels.csv, loaded into a dataframe indexed by image name with a genus column giving the bee type; in machine learning, the dataset largely decides the fate of the algorithm. Conceptually, the multiclass SVM loss asks that, for a query image, the score of the correct class exceed the scores of the incorrect classes by some margin $\Delta$. The sklearn.svm.SVC class exposes the relevant knobs directly, e.g. SVC(C=1.0, kernel='rbf', degree=3, gamma='scale') for C-support vector classification.
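Putting those pieces together on image data, the sketch below uses scikit-learn's bundled 8×8 digit images, flattening each into a 64-element vector as described above; it stands in for the post's own image-loading code:

```python
# Sketch: flatten 8x8 digit images into 64-element vectors and classify them with an SVM.
from sklearn.datasets import load_digits
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

digits = load_digits()
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))    # shape (n_samples, 64)

X_train, X_test, y_train, y_test = train_test_split(
    data, digits.target, test_size=0.25, random_state=0)

model = Pipeline([("scaler", StandardScaler()), ("svm", SVC(kernel="rbf"))])
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```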
Since their introduction, SVMs have been applied successfully to many real-world problems, from text and hypertext categorization to face detection, and the functions available in OpenCV, mahotas, and scikit-learn cover everything needed to build the image classification pipeline described above.
http://www.bridgebase.com/forums/topic/76801-acbl-nabc-online-individual/page__st__60
# BBO Discussion Forums: ACBL NABC Online Individual
## ACBL NABC Online Individual July 23 - 26, 2017 on BBO
### #61wjomlex
Posted 2017-July-24, 17:57
It seems like there *is* stratification? Check out the results from today's Daily Bulletin (pages 19-22). Some people are getting more masterpoints than those above them.
### #62barmar
Posted 2017-July-25, 10:36
wjomlex, on 2017-July-24, 17:57, said:
It seems like there *is* stratification? Check out the results from today's Daily Bulletin (pages 19-22). Some people are getting more masterpoints than those above them.
No, there's no stratification. Can you give examples of players who got more than someone above them?
The session masterpoint formula is very simple: (18*4) / (rank + 5) red points, except for first place, which gets 12 red + 6 gold. No points below 1.00 are awarded (there was an error on the first day, we published fractional awards, but that's been fixed).
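A quick sketch of that award formula in code (assuming 1-based ranks and ignoring how ties are handled, which the post above does not spell out):

```python
# Sketch of the stated session award: 72 / (rank + 5) red points, first place
# gets 12 red + 6 gold, and awards below 1.00 are dropped.
def session_award(rank):
    if rank == 1:
        return {"red": 12.0, "gold": 6.0}
    points = (18 * 4) / (rank + 5)
    return {"red": round(points, 2)} if points >= 1.0 else {}

for rank in (1, 2, 10, 67, 68):
    print(rank, session_award(rank))
```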
### #63diana_eva
Posted 2017-July-27, 06:41
Results:
http://webutil.bridg...59527&overall=y
### #64johnu
Posted 2017-July-27, 09:59
barmar, on 2017-July-25, 10:36, said:
No, there's no stratification. Can you give examples of players who got more than someone above them?
Some people got bigger session awards than their overall standing points would have been.
### #65barmar
Posted 2017-July-27, 23:31
You get either the sum of your session awards or your overall award, whichever is higher. This is the same as how ACBL f2f tournaments work. When you look at the final results, the awards shown are a mix of session and overall awards.
### #66pigpenz
Posted 2017-August-03, 09:50
barmar, on 2017-July-25, 10:36, said:
No, there's no stratification. Can you give examples of players who got more than someone above them?
The session masterpoint formula is very simple: (18*4) / (rank + 5) red points, except for first place, which gets 12 red + 6 gold. No points below 1.00 are awarded (there was an error on the first day, we published fractional awards, but that's been fixed).
Ok I wondered when I saw the first day in the bulletin they showed places down to 300 for the section. so this was a mistake then? Actually I think that is how they should have done it IMHO.
### #67barmar
Posted 2017-August-03, 10:27
pigpenz, on 2017-August-03, 09:50, said:
Ok I wondered when I saw the first day in the bulletin they showed places down to 300 for the section. so this was a mistake then? Actually I think that is how they should have done it IMHO.
They changed it for the overall awards. They were given to the top 35% of the field, so it included fractional awards.
https://www.gamedev.net/blogs/entry/2254007-project-progress/
# Project Progress
## Entity Management
As of late, I have spent hours agonizing over the best way to implement my entity system. I am suffering a bit from neophyte syndrome, trying to think through all the different scenarios and needs of my application. One particular mental block I kept running into was how to handle articulated physics entities. I do not plan to have skeletal animation in my game; however, there will be skeletal entities consisting of a hierarchy of physics joints and rigid bodies. Originally, I had considered doing a fancy schmancy component system that seems to be the current rage, but I've run into hitches with that as well, and I think I'm best off slowing down and writing something specific to the needs of my application.
First off, the terminology has been messing with me. There's actors, entities, scene objects, renderables, and who knows what else. I think I've decided to call the building blocks of my scene entities. For my game, there are two large subsystems that the entity system will interact with: the render and physics subsystems. There could be more, like an AI subsystem for instance, but I'm trying to keep it simple so my brain doesn't explode.
As a result, until I have better idea of the requirements of my system, I plan to forgo a formal entity system. I'll simply do things the old fashioned way! Once I have something working, I may try to make a game out of it, in which case I may opt for a more formal entity system. What I'm realizing though is that these tools are only useful if they make my life easier. If I'm designing a pretty simple single player physics sandbox application, I don't need a fully fledged game engine with a component based entity system to make it work.
## More Project Details
So what exactly is this project I'm working on? Well, I thought it would be cool to experiment with more advanced rendering techniques in DirectX 11, and at the same time I would love to get my hands on a 3D physics engine and have some fun with it. Enter the tank sandbox! Don't ask me why, but I have this vision of an articulated tank physics model, and I really want to implement it. Something like this (except in 3D):
The world will be an assortment of static and dynamic simple objects. I'm planning to use an XML file to describe the world map. Basically, the player will be able to ride around and shoot stuff. Woohoo, sounds fun! This is where my engineering side dominates my creative side. I can't think of a good game idea, so I'm just going to create a fun and pretty sandbox application.
So what are my goals? Well, goal number one is to learn DirectX 11. This is my first project using it, and I wouldn't exactly be a good graphics programmer if I didn't know it though and through now would I? As I have stated in my previous posts, I really want to play around with Deferred rendering, a few different shadow techniques (say, Cascading Shadow Mapping for example), SSAO, HDR, motion blur, and depth of field. That means most of my time will be spent on the rendering side, which is what I want. Although the physics aspect is fun, I also chose it specifically to experiment with animation and add the challenge of balancing visual/physical aspects of a game object. So that's goal number two.
## Formulating a Plan of Attack
With what I have so far, I can pop up a window and initialize DirectX in just a few lines of code. Here's what my main function looks like:
```cpp
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, PSTR pScmdline, int iCmdshow)
{
    // Create DirectX with the default adapter
    SGF::IDX11Graphics *graphics = SGF::DX11CreateGraphics();
    if( !graphics->Init() || !graphics->CreateDevice(0) )
        return 1;

    // Initialize the window
    SGF::DX11SampleWindow wnd("Hello", "HelloWorld", 800, 600);
    wnd.Init(graphics, hInstance);

    // Message pump / main loop
    while( !wnd.HasQuit() )
    {
        wnd.PumpMessages();
        // Do Logic
    }

    // Shut down
    graphics->Release();
    delete graphics;
    return 0;
}
```
My next plan of attack is to write a basic resource loader for textures and meshes, followed by a mesh helper class. Deciding on a file format is always hard for me, but I think I'll go with .x, since I'm familiar with it. I will probably use something like Assimp to load it rather than write something myself. Next, I want to create a camera class and get some basic meshes rendered. It would be a good morale booster for me to see something on the screen, so I think that's a good milestone.
After this, I want to create an interface for the physics system and start playing around with rudimentary scenes. Everything will be rendered and updated brute force, but just for testing purposes. It's always good to unit test sections of code to ensure that they work. It allows you the confidence to build on it with later systems. Once I have this up and running, I'll start looking into a world manager to manage scene objects and load/unload maps. At the same time, I want to research the best way to organize my rendering pipeline.
There is so much to do, but I just keep telling myself that if I cut it down into manageable chunks and shoot for modest milestones, things will get done. It's fun being able to do this in a hobby setting, because I can go about it less structured and more experimental. I will continue to update as I make progress. I hope to have some screenshots in the next week or so! Until then, later.
## 1 Comment
Looking forward to playing the sandbox game! Can't wait to see how the effects will look.
https://www.physicsforums.com/threads/chaotic-orbits.411446/
# Chaotic Orbits
1. Jun 21, 2010
### Eynstone
Consider a dynamic system with a periodic trajectory. Given an arbitrary duration T of time,
does there exist a chaotic trajectory of a similar system which approximates the closed orbit
for the duration T with a given accuracy?
Chaotic orbits which I've seen so far appear to be almost periodic at times but eventually stray off. I wonder if this is a general phenomenon.
2. Jun 25, 2010
### Filip Larsen
If you have a system that exhibits chaos, this system will have a region of phase space in which periodic orbits are dense, meaning that for any periodic orbit you can find another one arbitrarily close. A chaotic trajectory in such a region will indeed often look similar to a periodic orbit without actually being periodic, and off the top of my head I do believe that for any such periodic orbit you can find an arbitrarily close chaotic trajectory (perhaps someone else can confirm this?).
However, note that since dense periodic orbits are a necessary but not a sufficient condition for chaos, the reverse is not true; that is, a system is not necessarily chaotic just because it has dense periodic orbits. If, moreover, you have a system with only a single periodic orbit in a region (that is, periodic orbits are not dense in that region), then you can conclude that it is not chaotic. I say this because I am not sure whether you have an isolated periodic orbit in mind or not.
3. Jun 27, 2010
### Eynstone
I've some good reasons to believe this. Periodic & 'straggling' geodesics being close together is a common phenomenon. The paths of most conservative systems can be modelled as geodesics on surfaces.
4. Jun 27, 2010
### Filip Larsen
It's not clear to me where you want to go with this and if you have a question in there somewhere. If you want to pursue the matter you can perhaps describe your problem in more detail; a concrete example is usually always a good starting point.
Your original post contains two questions. The first seems to have the answer "no" under the assumption you are referring to a single isolated periodic orbit and the answer "maybe" if you are referring to dense periodic orbits. The second question can be answered with a "yes", since chaotic orbits over time by definition (i.e. sensitivity on initial conditions) will separate from any other arbitrarily close orbit.
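As a small illustration of the "stray off" behaviour discussed in this thread (a generic sketch, not tied to any particular system mentioned above), two logistic-map trajectories started a tiny distance apart track each other for a while and then separate:

```python
# Sketch: sensitivity to initial conditions in the chaotic logistic map x -> 4x(1 - x).
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10   # two trajectories, initially 1e-10 apart
for n in range(60):
    if n % 10 == 0:
        print(f"step {n:2d}: |x - y| = {abs(x - y):.3e}")
    x, y = logistic(x), logistic(y)
```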
https://www.magn-reson.net/1/27/2020/
Magn. Reson., 1, 27–43, 2020
https://doi.org/10.5194/mr-1-27-2020
Research article | 28 Feb 2020
# Transferring principles of solid-state and Laplace NMR to the field of in vivo brain MRI
João P. de Almeida Martins1,2, Chantal M. W. Tax3, Filip Szczepankiewicz4,5, Derek K. Jones3,6, Carl-Fredrik Westin4,5, and Daniel Topgaard1,2
• 1Division of Physical Chemistry, Department of Chemistry, Lund University, Lund, Sweden
• 2Random Walk Imaging AB, Lund, Sweden
• 3Cardiff University Brain Research Imaging Centre (CUBRIC), Cardiff University, Cardiff, UK
• 4Harvard Medical School, Boston, MA, USA
• 5Radiology, Brigham and Women's Hospital, Boston, MA, USA
• 6Mary MacKillop Institute for Health Research, Australian Catholic University, Melbourne, Australia
Correspondence: João P. de Almeida Martins (joao.martins@fkem1.lu.se)
Abstract
Magnetic resonance imaging (MRI) is the primary method for noninvasive investigations of the human brain in health, disease, and development but yields data that are difficult to interpret whenever the millimeter-scale voxels contain multiple microscopic tissue environments with different chemical and structural properties. We propose a novel MRI framework to quantify the microscopic heterogeneity of the living human brain as spatially resolved five-dimensional relaxation–diffusion distributions by augmenting a conventional diffusion-weighted imaging sequence with signal encoding principles from multidimensional solid-state nuclear magnetic resonance (NMR) spectroscopy, relaxation–diffusion correlation methods from Laplace NMR of porous media, and Monte Carlo data inversion. The high dimensionality of the distribution space allows resolution of multiple microscopic environments within each heterogeneous voxel as well as their individual characterization with novel statistical measures that combine the chemical sensitivity of the relaxation rates with the link between microstructure and the anisotropic diffusivity of tissue water. The proposed framework is demonstrated on a healthy volunteer using both exhaustive and clinically viable acquisition protocols.
1 Introduction
The structure of the brain is affected by both disease and normal development over a wide range of length scales. To measure and map the cellular architecture and molecular composition of the living human brain is a challenging experimental endeavor that promises far-reaching implications for both clinical diagnosis and our understanding of normal brain function. Over recent decades, magnetic resonance imaging (MRI) methods have been crucial for the progress of neuroanatomical studies (Lerch et al., 2017). Most clinical MRI applications rely on detecting 1H nuclei of water molecules to produce three-dimensional images with a spatial resolution on the millimeter scale. Even though the attainable resolution is clearly insufficient for direct observation of individual cells, chemical and microstructural features can be investigated by probing their effect on magnetic resonance observables such as nuclear relaxation rates (Halle, 2006) and the translational diffusivity (Le Bihan, 1995) of water. Relaxation and diffusion parameters can thus indirectly report on various microscopic properties, including cell density (Padhani et al., 2009), orientation of nerve fibers (Basser and Pierpaoli, 1996), and the presence of nutrients (Daoust et al., 2017). Current quantitative relaxation (Tofts, 2003) and diffusion (Jones, 2010) MRI observables are exquisitely sensitive to the cellular processes associated with knowledge acquisition (Zatorre et al., 2012), neuropsychiatric disorders (Kubicki et al., 2007), and different tumor types (Nilsson et al., 2018a), but suffer from poor specificity, and the same experimental data may support several distinct biological scenarios (Zatorre et al., 2012).
More detailed information can be obtained by taking into account that each MRI voxel comprises hundreds of thousands of cells with potentially different properties, implying that the per-voxel signal may include contributions from multiple microenvironments with distinct values of the MRI observables. To resolve the various microenvironments within a single voxel remains a highly challenging problem of vital importance for the progression of quantitative MRI studies. The signals from heterogeneous materials are often approximated as integral transformations of nonparametric distributions of relaxation rates or diffusivities (Istratov and Vyvenko, 1999), which may be estimated by Laplace inversion of data acquired as a function of the relevant experimental variable (Whittall and MacKay, 1989). Within the context of human brain MRI, the components of the distributions have been assigned to water populations residing in specific tissue microenvironments such as myelin (Mackay et al., 1994) and tumors (Laule et al., 2017). The power to resolve and individually characterize the different components can be boosted by combining multiple relaxation- and diffusion-encoding blocks and analyzing the data as joint probability distributions of the relevant observables (English et al., 1991). These ideas follow the principles of multidimensional nuclear magnetic resonance (NMR) spectroscopy and form the basis for multidimensional Laplace NMR which has become routine in the field of porous media (Galvosas and Callaghan, 2010; Song, 2013) and is now being combined with MRI (Zhang and Blumich, 2014; Benjamini and Basser, 2017). Recently, similar relaxation–diffusion correlation protocols have been translated to in vivo studies using model-based rather than nonparametric data inversion (De Santis et al., 2016; Veraart et al., 2017). So far, relaxation–diffusion correlation studies have relied on the Stejskal–Tanner experiment (Stejskal and Tanner, 1965), a pulsed gradient spin-echo (PGSE) sequence that has been in use for more than 50 years and where the signal is encoded for diffusion along a single axis using a pair of collinear gradient pulses. The limitations of the conventional experimental design become apparent when considering a white matter voxel comprising anisotropic domains with multiple orientations. When projected onto the measurement axis defined by the magnetic field gradients, the combination of diffusion anisotropy and orientation dispersion gives rise to a broad distribution of effective diffusivities (Topgaard and Söderman, 2002) that is challenging to retrieve with nonparametric Laplace inversion and, most importantly, impossible to differentiate from a spread of isotropic diffusivities (Mitra, 1995). Consequently, despite the fact that the relaxation–diffusion correlation yields more detailed information than conventional quantitative MRI, the inherent limitations of the Stejskal–Tanner experiment prevent unambiguous discrimination between isotropic and anisotropic contributions to the diffusivity distributions as well as model-free resolution of tissue microenvironments for heterogeneous anisotropic materials such as brain tissue.
We have recently shown that data acquisition and processing schemes for correlating isotropic and anisotropic nuclear interactions in multidimensional solid-state NMR spectroscopy (Schmidt-Rohr and Spiess, 1994) can be translated to diffusion NMR (de Almeida Martins and Topgaard, 2016), relaxation–diffusion correlation NMR (de Almeida Martins and Topgaard, 2018), and diffusion MRI (Topgaard, 2019), yielding nonparametric diffusion tensor distributions (Jian et al., 2007) with resolution of multiple isotropic and anisotropic diffusion components. These “multidimensional diffusion MRI” methods (Topgaard, 2017) rely on varying both the amplitude and orientation of the magnetic field gradients within a single encoding block in order to mimic the effects of sample reorientation (Frydman et al., 1992) and rotor-synchronized radio frequency pulse sequences (Gan, 1992) in multidimensional solid-state NMR to target specific aspects of the tensorial property being investigated. Here, we incorporate these ideas into a clinically feasible relaxation–diffusion correlation MRI protocol to quantify the microscopic heterogeneity of the living human brain. The suggested acquisition and analysis protocols resolve tissue heterogeneity on a five-dimensional space of transverse relaxation rates and axisymmetric diffusion tensors that report on the underlying chemical composition and microscopic geometry. Nonparametric relaxation–diffusion distributions are obtained for each voxel in the three-dimensional image using Monte Carlo data inversion to deal with the nonuniqueness of the Laplace inversion and estimate the uncertainty of quantitative parameters derived from the distributions (Prange and Song, 2009). Subvoxel tissue environments are resolved without limiting assumptions on the number or properties of the individual components and are characterized with statistical measures that have intuitive relations with the local microstructure.
Figure 1. Acquisition protocol for 5D relaxation–diffusion MRI. (a) Pulse sequence for acquiring images encoded for relaxation and diffusion in a 5D space defined by the echo time τE, b-tensor trace b, normalized anisotropy bΔ, and orientation (Θ, Φ). An EPI image readout block acquires the spin echo produced by slice-selective 90° and 180° radio-frequency pulses. The 180° pulse is encased by a pair of gradient waveforms allowing for diffusion encoding according to principles from multidimensional solid-state NMR (Topgaard, 2017) (red, green, and blue lines). The signal is encoded for the transverse relaxation rate R2 by varying the value of τE. (b) Numerically optimized gradient waveforms (Sjölund et al., 2015) yielding four distinct b-tensor shapes (bΔ = −0.5, 0.0, 0.5, and 1.0) (Eriksson et al., 2015).
2 Methods
## 2.1 Multidimensional relaxation–diffusion encoding
Figure 1a displays a pulse sequence wherein the signal S(τE,b) from a given voxel is encoded for information about the transverse relaxation rate R2 (${R}_{\mathrm{2}}=\mathrm{1}/{T}_{\mathrm{2}}$ where T2 is the transverse relaxation time) and diffusion tensor D by the experimental variables echo time τE and diffusion encoding tensor b according to de Almeida Martins and Topgaard (2018):
$$\frac{S(\tau_\mathrm{E},\mathbf{b})}{S_0}=\int_{0}^{+\infty}\int_{\mathbf{D}\in\mathrm{Sym}_{3}^{+}}P(R_2,\mathbf{D})\,K(\tau_\mathrm{E},\mathbf{b},R_2,\mathbf{D})\,\mathrm{d}\mathbf{D}\,\mathrm{d}R_2,\qquad\text{(1)}$$
where P(R2,D) is a joint probability distribution of R2 and D, the kernel $K\left({\mathit{\tau }}_{\mathrm{E}},\mathbf{b},{R}_{\mathrm{2}},\mathbf{D}\right)$ links the analysis space (R2,D) to the acquisition space (τE, b), S0 denotes the signal amplitude at (τE=0, b=0), and Sym${}_{\mathrm{3}}^{+}$ represents the mathematical space containing all 3×3 symmetric positive-definite matrices. The magnetic field gradient waveforms define an axially symmetric b-tensor that is parameterized by its trace (b), orientation (Θ, Φ), and normalized anisotropy (bΔ) (Eriksson et al., 2015), the latter controlling the influence of diffusion anisotropy on the detected signal in a manner corresponding to the effect of the angle between the main magnetic field and the rotor spinning axis in solid-state NMR (Frydman et al., 1992). While diffusion encoding performed by a conventional PGSE sequence is limited to a single b-tensor “shape” (bΔ=1), we have shown that variation of bΔ enables model-free separation and quantification of the isotropic and anisotropic contributions to the diffusion tensors (de Almeida Martins and Topgaard, 2016). In this work, we used the numerically optimized gradient waveforms displayed in Fig. 1b (Sjölund et al., 2015) to generate b-tensors at four distinct values of bΔ. In common with conventional diffusion MRI, our method requires a minimum echo time of ∼50 ms to accommodate diffusion encoding, causing the signal contributions from components with R2 > 60 s−1 to be reduced to less than 5 % of their initial amplitude. This means that the proposed protocol would require substantial signal averaging in order to quantify the fractions of fast relaxing components, thus precluding a mapping of myelin water (R2≈70 s−1) – one of the primary focuses of early multi-echo MRI methods (Mackay et al., 1994) – within a time compatible with either clinical practice or research.
Throughout the signal encoding process, the relaxation and diffusion of water are both affected by molecular exchange between chemically different sites and interactions with cell membranes. Averaging all these complex effects into sets of effective relaxation rates and apparent diffusion tensors, subvoxel composition can be reported as a collection of independent tissue microenvironments, each of which is characterized by a set of (R2, D) coordinates (de Almeida Martins and Topgaard, 2018). Assuming axial symmetry, the various microscopic diffusion tensors are parameterized by four independent dimensions: two eigenvalues corresponding to the axial and radial diffusivities, D|| and D, and the polar and azimuthal angles, θ and φ, describing the orientation of D relative to the laboratory frame of reference. The D|| and D diffusivities can be combined to define measures of isotropic diffusivity, ${D}_{\mathrm{iso}}=\left({D}_{\mathrm{|}\mathrm{|}}+\mathrm{2}{D}_{\perp }\right)/\mathrm{3}$, and normalized diffusion anisotropy, ${D}_{\mathrm{\Delta }}=\left({D}_{\mathrm{|}\mathrm{|}}-{D}_{\perp }\right)/\mathrm{3}{D}_{\mathrm{iso}}$ (Eriksson et al., 2015), which report on the “size” and “shape” of the corresponding microscopic diffusion patterns (Topgaard, 2017). Tissue microscopic heterogeneity is therefore characterized with P(R2, Diso, DΔ, θ, φ) distributions, whose dimensions directly correspond to those of the 5D acquisition space (τE, b, bΔ, Θ, Φ):
$$\frac{S(\tau_\mathrm{E},b,b_\Delta,\Theta,\Phi)}{S_0}=\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{-1/2}^{1}\!\int_{0}^{\pi}\!\int_{0}^{2\pi}K(\tau_\mathrm{E},b,b_\Delta,\Theta,\Phi,R_2,D_\mathrm{iso},D_\Delta,\theta,\varphi)\,P(R_2,D_\mathrm{iso},D_\Delta,\theta,\varphi)\,\mathrm{d}\varphi\,\sin\theta\,\mathrm{d}\theta\,\mathrm{d}D_\Delta\,\mathrm{d}D_\mathrm{iso}\,\mathrm{d}R_2.\qquad\text{(2)}$$
The relaxation–diffusion encoding kernel is defined as
$$K(\ldots)=\exp(-\tau_\mathrm{E}R_2)\,\exp\!\left(-bD_\mathrm{iso}\left[1+2\,b_\Delta D_\Delta P_2(\cos\beta)\right]\right),\qquad\text{(3)}$$
where $P_2(x)=(3x^2-1)/2$ denotes the second Legendre polynomial, and β is the arc angle between the major symmetry axes of b and D, given by $\cos\beta=\cos\Theta\cos\theta+\cos(\Phi-\phi)\sin\Theta\sin\theta$. According to Eq. (3), each (τE, b, bΔ, Θ, Φ) coordinate establishes correlations across the separate dimensions of the R2D space. Consequently, sampling various combinations of echo times and b-tensor parameters facilitates a comprehensive mapping of tissue-specific relaxation and diffusion properties.
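As a concrete illustration, the kernel of Eq. (3) can be evaluated directly for a given acquisition point and a given (R2, D) component. The minimal NumPy sketch below was written for this text; the function name and the example parameter values are illustrative and not part of the published analysis toolbox.

```python
import numpy as np

def rd_kernel(tau_E, b, b_delta, Theta, Phi, R2, D_iso, D_delta, theta, phi):
    """Relaxation-diffusion kernel of Eq. (3) for axisymmetric b- and D-tensors.

    Acquisition variables: echo time tau_E [s], b-value b [s/m^2], normalized
    b-tensor anisotropy b_delta, and b-tensor orientation (Theta, Phi) [rad].
    Component variables: relaxation rate R2 [1/s], isotropic diffusivity
    D_iso [m^2/s], normalized anisotropy D_delta, and orientation (theta, phi).
    """
    # Arc angle beta between the symmetry axes of b and D (spherical law of cosines)
    cos_beta = (np.cos(Theta) * np.cos(theta)
                + np.cos(Phi - phi) * np.sin(Theta) * np.sin(theta))
    p2 = (3.0 * cos_beta**2 - 1.0) / 2.0           # second Legendre polynomial
    relaxation = np.exp(-tau_E * R2)                # exp(-tau_E R2) attenuation
    diffusion = np.exp(-b * D_iso * (1.0 + 2.0 * b_delta * D_delta * p2))
    return relaxation * diffusion

# Example: a linear-encoding measurement (b_delta = 1) probing a prolate component
# (D_delta = 0.9) whose symmetry axis is tilted 30 degrees from the b-tensor axis.
K = rd_kernel(tau_E=80e-3, b=2e9, b_delta=1.0, Theta=0.0, Phi=0.0,
              R2=15.0, D_iso=0.8e-9, D_delta=0.9, theta=np.pi/6, phi=0.0)
print(K)
```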
## 2.2 MRI measurements
A healthy volunteer (female, 31 years) was scanned on a Siemens Magnetom Prisma 3T system equipped with a 20-channel-receiver head coil, and capable of delivering gradients of 80 mT m−1 at the maximum slew rate of 200 T m−1 s−1. The measurements were approved by a local Institutional Review Board (Partners Healthcare System), and the research subject provided written informed consent prior to participation.
Experimental data were acquired using the prototype spin-echo sequence (Lasič et al., 2014) and gradient waveforms shown in Fig. 1. The depicted waveforms give four distinct b-tensor anisotropies (bΔ = {−0.5, 0.0, 0.5, 1.0}), which were probed at varying combinations of echo times, b values, and b-tensor orientations. The waveforms giving bΔ = −0.5, 0.0, and 0.5 (see Fig. 1b) were calculated with a numerical optimization package (Sjölund et al., 2015) (https://github.com/jsjol/NOW, last access: 1 November 2019), including compensation for the effects of concomitant gradients (Szczepankiewicz et al., 2019). This procedure yielded a pair of asymmetric gradient waveforms lasting 30.8 and 25.0 ms, separated by approximately 8.0 ms. Linear encoding (bΔ = 1) was implemented with two separate gradient waveforms: a symmetric bipolar gradient waveform whose encoding blocks lasted τ = 25.1 ms and were separated by 8.0 ms (see Fig. 1b), and a pair of τ = 15.1 ms single-pulsed gradients bracketing a time period of 13.7 ms. The spectral profile of the bipolar gradient waveform was tuned to that of the asymmetric gradient waveforms in order to reduce the influence of time-dependent diffusion (Woessner, 1963; Callaghan and Stepišnik, 1996).
Figure 2. Representative 5D relaxation–diffusion encoded signals S(τE,b) and distributions P(R2,D) for selected voxels in a living human brain. (a) Acquisition scheme showing τE, b, bΔ, Θ, and Φ as a function of acquisition point. (b) Experimental (gray circles) and fitted (black points) S(τE,b) signals from three representative voxels containing white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). The presented signal data were acquired according to the scheme shown in panel (a) and are drawn with the same horizontal axis. (c) Nonparametric R2-D distributions obtained for both pure (WM, GM, CSF) and mixed (WM+GM, WM+CSF, GM+CSF) voxels. The discrete distributions are reported as scatter plots in a 3D space of the logarithms of the transverse relaxation rate R2, isotropic diffusivity Diso, and axial–radial diffusivity ratio D||/D⊥. An auxiliary relaxation time T2 scale is included along the log(R2) axis to aid the inspection of the P(R2,D) plots. The diffusion tensor orientation (θ, φ) is color-coded as [R,G,B] = [cos φ sin θ, sin φ sin θ, cos θ] · |D|| − D⊥|/max(D||, D⊥), and the circle area is proportional to the statistical weight of the corresponding component. The contour lines on the sides of the plots represent projections of the 5D P(R2,D) distribution onto the respective 2D planes. Panels (b) and (c) display the signals S(τE,b) and the corresponding P(R2,D), respectively, for the same WM, GM, and CSF voxels.
A total of 852 individual images were recorded at different combinations of (τE, b, bΔ, Θ, Φ) throughout the entire scan time of 45 min. The acquisition protocol is summarized in Fig. 2a. Briefly, bΔ = 1 was acquired over 72 directions distributed over four b values (6, 10, 16, and 40 directions at b = 0.1, 0.7, 1.4, and 2 × 10⁹ s m⁻², respectively), both bΔ = −0.5 and 0.5 were collected across 64 directions spread out over four b values (6, 10, 16, and 32 directions at, respectively, b = 0.1, 0.7, 1.4, and 2 × 10⁹ s m⁻²), and bΔ = 0 was acquired for a single gradient waveform orientation, repeated 6 times over six b values (b = 0.1, 0.3, 0.7, 1, 1.4, and 2 × 10⁹ s m⁻²). For each (b, bΔ) coordinate, the set of directions was optimized using an electrostatic repulsion scheme (Bak and Nielsen, 1997; Jones et al., 1999). The various (b, bΔ, Θ, Φ) sets were then repeatedly acquired at three different echo times (τE = 80, 110, and 150 ms) using the spectrally tuned waveforms. The nontuned Stejskal–Tanner waveform was used to acquire bΔ = 1 data at τE = 60 and 80 ms. Comparison between data acquired with the bipolar and the Stejskal–Tanner gradient waveforms at τE = 80 ms allowed us to assess the validity of the Gaussian diffusion approximation (Callaghan and Stepišnik, 1996).
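As a quick consistency check on the sampling scheme quoted above, the per-shell direction counts can be tallied; the sketch below is simple bookkeeping written for this text (not an acquisition script) and reproduces the stated total of 852 images.

```python
# Tally of the 45 min acquisition described above (counts taken from the text).
# Spectrally tuned waveforms, repeated at tau_E = 80, 110, and 150 ms:
tuned_per_echo = (
    sum([6, 10, 16, 40])        # b_delta = 1.0: 72 directions over 4 b values
    + 2 * sum([6, 10, 16, 32])  # b_delta = -0.5 and 0.5: 64 directions each
    + 6 * 6                     # b_delta = 0: one orientation, 6 repeats x 6 b values
)
# Nontuned Stejskal-Tanner waveform, b_delta = 1 at tau_E = 60 and 80 ms:
stejskal_tanner = 2 * sum([6, 10, 16, 40])
total_images = 3 * tuned_per_echo + stejskal_tanner
print(tuned_per_echo, total_images)   # 236 images per echo time, 852 images in total
```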
All images were recorded using a repetition time of 3 s, and an echo-planar readout with a 220 × 220 × 66 mm³ field of view, spatial resolution of 2 × 2 × 6 mm³, and a partial Fourier factor of 6/8. Spatial resolution was sacrificed in favor of high signal-to-noise ratios (SNRs). The 2 × 2 × 6 mm³ anisotropic voxel configuration enables a large coverage with a minimal number of slices and yields axial maps with a high spatial resolution wherein anatomical features of interest can be easily identified. The acquired images were corrected for subject motion in ElastiX (Klein et al., 2009), using the extrapolated reference method detailed in Nilsson et al. (2015). Motion-corrected and non-motion-corrected data were then inverted using a quick 12-bootstrap procedure (see the following subsection for more details on the inversion), and the resulting parameter maps were subsequently compared. As no substantial differences were found between the results from the corrected and noncorrected datasets, we opted to not use motion correction in our final analysis. No denoising approaches were used prior to data inversion.
## 2.3 Nonparametric Monte Carlo inversion
Algorithms designed to solve Eq. (2) have been reviewed in both general (Istratov and Vyvenko, 1999) and magnetic resonance (Mitchell et al., 2012) literature. While classical inversion methods can be successfully used to estimate the 5D P(R2, Diso, DΔ, θ, φ) distribution, they become costly in terms of memory at the high dimensionality of our protocol. To circumvent this difficulty, we introduced an inversion approach wherein our correlation space is explored through a directed iterative algorithm, as explained in de Almeida Martins and Topgaard (2018). The algorithm starts by randomly selecting 200 points from the (0 < log(R2/s−1) < 1.5, −10 < log(D||/m2s−1) < −8.5, −10 < log(D/m2s−1) < −8.5, 0 < cosθ < 1, 0 < φ < 2π) space. A discrete P(R2,D) distribution is then estimated by solving a discretized version of Eq. (2) via a standard non-negative least squares (NNLS) algorithm (Lawson and Hanson, 1974). Points with nonzero weights are stored and merged with a new randomly generated set of 200 (R2, D||, D, θ, φ) points, and the weights of the merged set of points are found through a NNLS fit (Lawson and Hanson, 1974). The process of selecting points with nonzero weights, subsequently merging them with a random (R2, D||, D, θ, φ) configuration, and finally fitting the merged set is repeated a total of 20 times in order to find a P(R2, D||, D, θ, φ) distribution yielding a low residual sum of squares. Following 20 rounds, the resulting (R2, D||, D, θ, φ) configuration is selected, split, and subjected to a small random mutation. The original and mutated configurations are merged and a new P(R2, D||, D, θ, φ) distribution is determined by fitting the merged set to the data using the NNLS algorithm (Lawson and Hanson, 1974). The mutation and fitting procedure is repeated 20 times to find the local (R2, D||, D, θ, φ) configuration corresponding to the lowest sum of squared residuals. A final plausible P(R2, D||, D, θ, φ) solution is subsequently estimated at the end of the mutation cycle by selecting the 10 (R2, D||, D, θ, φ) points with the highest weights and performing a final NNLS fit.
The procedure described above is performed voxel-wise, resulting in an array of spatially resolved P(R2, D||, D, θ, φ) discrete distributions. Owing to the stochastic nature of the inversion protocol, we may fail at retrieving a nontrivial solution, which produces a small number of randomly located black voxels in the parameter maps. To correct for this, we combine the points from each voxel with the ones from its six nearest neighbors, subsequently fitting the set of 7×10 points to the underlying signal in order to find the 10 most likely points. The new (R2, D||, D, θ, φ) set is fitted to the signal, and the resulting P(R2,D) is taken as the solution of the analyzed voxel. Finally, the P(R2, D||, D, θ, φ) distribution is mapped onto the (R2, Diso, DΔ, θ, φ) space.
Following the works of Prange and Song (Prange and Song, 2009), we replace traditional regularization constraints (Whittall and MacKay, 1989) with an unconstrained Monte Carlo approach that estimates voxel-wise ensembles of N distinct P(R2, D) solutions consistent with the primary data (de Almeida Martins and Topgaard, 2018). In this study, we estimated ensembles of N=96 solutions per voxel. The level of dispersion within a given solution set characterizes the uncertainty of the inversion procedure and can thus be used to estimate the uncertainty of any quantities derived from P(R2,D) (Prange and Song, 2009; de Almeida Martins and Topgaard, 2018).
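The logic of one realization of this random-search inversion can be sketched compactly. The version below is an illustrative NumPy/SciPy rendering written for this text, not the released implementation (which, as noted below, is in MATLAB); it reuses the rd_kernel function sketched in Sect. 2.1, the search ranges and the 200/20/20/10 counts follow the description above, and the 5 % multiplicative mutation step is an assumed detail.

```python
import numpy as np
from scipy.optimize import nnls

def random_components(n, rng):
    """Draw n candidate (R2, D_par, D_perp, theta, phi) points over the quoted ranges."""
    log_R2   = rng.uniform(0.0, 1.5, n)      # log10(R2 / s^-1)
    log_Dpar = rng.uniform(-10.0, -8.5, n)   # log10(D_par / m^2 s^-1)
    log_Dper = rng.uniform(-10.0, -8.5, n)   # log10(D_perp / m^2 s^-1)
    costheta = rng.uniform(0.0, 1.0, n)
    phi      = rng.uniform(0.0, 2 * np.pi, n)
    return np.column_stack([10**log_R2, 10**log_Dpar, 10**log_Dper,
                            np.arccos(costheta), phi])

def kernel_matrix(acq, comps):
    """acq: (Nmeas, 5) rows of (tau_E, b, b_delta, Theta, Phi); comps: (Ncomp, 5)."""
    R2, Dpar, Dper, theta, phi = comps.T
    D_iso = (Dpar + 2 * Dper) / 3.0
    D_delta = (Dpar - Dper) / (3.0 * D_iso)
    tau_E, b, b_delta, Theta, Phi = (acq[:, i][:, None] for i in range(5))
    return rd_kernel(tau_E, b, b_delta, Theta, Phi, R2, D_iso, D_delta, theta, phi)

def invert_voxel(signal, acq, rng, n_random=200, n_prolif=20, n_mutate=20, n_out=10):
    comps = random_components(n_random, rng)
    for _ in range(n_prolif):            # proliferation: merge survivors with fresh random points
        w, _ = nnls(kernel_matrix(acq, comps), signal)
        comps = np.vstack([comps[w > 0], random_components(n_random, rng)])
    for _ in range(n_mutate):            # mutation: small random perturbation of survivors
        w, _ = nnls(kernel_matrix(acq, comps), signal)
        survivors = comps[w > 0]
        if survivors.size == 0:
            survivors = random_components(n_random, rng)
        mutated = survivors * rng.normal(1.0, 0.05, survivors.shape)
        comps = np.vstack([survivors, mutated])
    w, _ = nnls(kernel_matrix(acq, comps), signal)
    keep = np.argsort(w)[-n_out:]        # keep the highest-weight components
    w_final, _ = nnls(kernel_matrix(acq, comps[keep]), signal)
    return comps[keep], w_final

# Example usage: rng = np.random.default_rng(0); comps, weights = invert_voxel(signal, acq, rng)
```

Repeating invert_voxel with independent random seeds yields the ensemble of N plausible solutions used for the uncertainty estimates described above.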
The nonparametric Monte Carlo inversion procedure was implemented in MATLAB and is publicly available in our GitHub repository: https://github.com/JoaoPdAMartins/md-dmri (last access: 25 February 2020) (Nilsson et al., 2018b). Inversion of the 45 min dataset took ∼72 h on a 12-Core Intel Xeon E5 2.7 GHz CPU, with a 64 GB DDR3 memory.
3 Results
## 3.1 Spatially resolved 5D relaxation–diffusion distributions
The proposed acquisition protocol translates into distinctive signal decay curves for each of the main components of the human brain. Indeed, voxels encompassing either white matter (WM), gray matter (GM), or cerebrospinal fluid (CSF) are all characterized by clearly distinct signal patterns (see Fig. 2b). The observed differences can be used to infer the gross R2D properties of the various cerebral constituents: WM signals are highly sensitive to both bΔ and (Θ, Φ), indicative of anisotropic diffusion along coherently aligned microscopic domains; GM signal patterns are rather insensitive to bΔ and (Θ, Φ), consistent with isotropic diffusion; and CSF data decays quickly with increasing b while remaining mostly unaffected by the other acquisition variables, features that suggest an isotropic medium characterized by relatively low R2 values. Voxels comprising mixtures of WM, GM, and/or CSF generate patterns that can be interpreted as a superposition of the signal data from the pure components.
Spatially resolved 5D R2-D nonparametric distributions are retrieved from the experimental data using the model-free inversion approach described in the Methods section. Figure 2c displays the solution ensembles for voxels containing WM, GM, and CSF, as well as combinations of those components: WM+GM, WM+CSF, and GM+CSF. Brain tissue possesses various microscopic components, whose relaxation and diffusion properties differ over various orders of magnitude. Therefore, tissue heterogeneity is more suitably described with logarithmic distributions, where pore anisotropy is parameterized with $\mathrm{log}\left({D}_{\mathrm{|}\mathrm{|}}/{D}_{\perp }\right)$ instead of DΔ. The distinctive characters of the raw signal patterns in Fig. 2b result in unique voxel-wise distributions that capture the gross microscopic features of the main cerebral components. Namely, CSF is characterized by high Diso, low R2, and D|| $\sim {D}_{\perp }$; in contrast, GM and WM both exhibit lower Diso and higher R2, with WM being differentiated by its high ${D}_{\mathrm{|}\mathrm{|}}/{D}_{\perp }$. As expected, voxels comprising mixtures of WM, GM, and CSF yield a linear combination of the distributions from the individual components.
Voxels containing pure GM or WM are characterized by clusters of P(R2,D) components covering a significant range of the R2D space. Because both tissue types comprise a plethora of cells with varying geometries or chemical compositions (e.g. axons with various amounts of myelin, dendrites, or glial cells), the observed spread may be interpreted as a direct consequence of the underlying cellular heterogeneity. However, similar broad distributions were also observed in spectroscopic multidimensional diffusion correlation measurements of discrete-component phantoms (de Almeida Martins and Topgaard, 2016, 2018), hinting that the solution spread additionally reflects the measurement and inversion uncertainty. This intrinsic uncertainty masks the effects of finer cellular details like the intra- and extra-axonal components modeled in previous diffusion-relaxation correlation MRI methods (Veraart et al., 2017).
As evidenced by Fig. 2c, pure GM voxels yield bimodal distributions that feature a nearly symmetric spread of components around the log(D||/D⊥) = 0 plane. The bimodality of the GM distributions is an artifact attributed to the fact that prolate (DΔ > 0, D||/D⊥ > 1) and oblate (DΔ < 0, D||/D⊥ < 1) diffusion tensors with similar Diso yield signal patterns that are only clearly discerned when DΔ > 0.5 or, equivalently, D||/D⊥ > 4 (Eriksson et al., 2015). Diffusion tensor imaging (DTI) studies of the human cortex have revealed a low, yet non-negligible, diffusion anisotropy in cortical GM tissue (Assaf, 2018). The observation of both oblate and prolate components in the pure GM voxel is consistent with those findings, with the intrinsically low anisotropy preventing an unambiguous distinction between DΔ > 0 or DΔ < 0 solutions. The artifactual spread of anisotropic components is expected to worsen with the increase in experimental noise. Random signal fluctuations create small differences between data acquired at different bΔ values and consequently introduce a preference for anisotropic components with arbitrary DΔ sign. This effect is similar to the “eigenvalue repulsion” artifact in conventional DTI, where noise introduces a discrepancy in the eigenvalues of the voxel-averaged diffusion tensor that in turn gives rise to a positive bias in anisotropy (Pierpaoli and Basser, 1996; Jones and Cercignani, 2010).
Figure 3. Statistical measures derived from the relaxation–diffusion distributions. The ensemble of 96 distinct P(R2,D) solutions was used to calculate means E[x], variances Var[x], and covariances Cov[x,y] of all combinations of transverse relaxation rate R2, isotropic diffusivity Diso, and squared anisotropy DΔ². The statistical measures were all derived from the entire R2-D distribution space on a voxel-by-voxel basis. Histograms are used to represent the parameter sets calculated for three voxels containing binary mixtures of white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). Each histogram comprises 96 estimates of a single statistical measure. The averages of statistical measures, 〈E[x]〉, 〈Var[x]〉, and 〈Cov[x,y]〉, are displayed as parameter maps whose color scales are given by the bars along the abscissas of the histograms. The crosses and arrows identify the heterogeneous voxels analyzed in the histograms; notice that the signaled points correspond to the average (as measured by the median) of the ensembles of plausible solutions shown in the histograms.
## 3.2 Statistical measures of tissue heterogeneity
The R2D distribution ensembles provide a wealth of information that is challenging to visualize in spatially resolved datasets with large image matrices. Drawing inspiration from the field of porous media, where ensembles of distributions have been converted into ensembles of scalar parameters such as total porosity or a fraction of bound fluid (Prange and Song, 2009), we extract statistical measures from the R2D distributions. A multitude of statistical functionals can be computed from the same distribution, meaning that the per-voxel P(R2,D) ensembles generate a comprehensive set of distinct voxel-wise parameters. As shown in Fig. 3, the Monte Carlo realizations of P(R2,D) are translated into ensembles of statistical measures, with 96 individual estimates being extracted for each measure. For compactness, the ensembles of statistical parameters are reduced to an average 〈⋅〉 and a dispersion measure σ[⋅] that is interpreted as the uncertainty of the estimated functional (Prange and Song, 2009). To render the results more robust to outliers, we report 〈⋅〉 as the ensemble median and estimate σ[⋅] as a median absolute deviation. The calculation of averages (as measured by the median) reduces the underlying ensemble of solutions into a single scalar and allows us to convey intra-voxel composition with parameter maps of average mean values 〈E[x]〉, average variances 〈Var[x]〉, and average covariances 〈Cov[x,y]〉 of all the relevant dimensions of the 5D R2D space (see Fig. 3). All of the statistical measures derived in this work parameterize diffusion tensor anisotropy with ${{D}_{\mathrm{\Delta }}}^{\mathrm{2}}$ rather than DΔ; this is motivated by the intrinsic difficulty of distinguishing between prolate and oblate tensors (Eriksson et al., 2015).
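For one voxel, these measures amount to weighted moments over a discrete solution followed by a median/MAD summary over the bootstrap ensemble. The sketch below is an illustrative NumPy version written for this text; the function names are hypothetical.

```python
import numpy as np

def voxel_statistics(weights, R2, D_iso, D_delta2):
    """Weighted means, variances, and covariances over one discrete P(R2, D) solution."""
    w = weights / weights.sum()
    x = np.vstack([R2, D_iso, D_delta2])             # rows: R2, D_iso, D_delta^2
    mean = x @ w
    cov = ((x - mean[:, None]) * w) @ (x - mean[:, None]).T
    return {"E": mean,                               # E[R2], E[Diso], E[DDelta^2]
            "Var": np.diag(cov),                     # Var[...]
            "Cov": cov[np.triu_indices(3, k=1)]}     # Cov[R2,Diso], Cov[R2,DD2], Cov[Diso,DD2]

def ensemble_summary(values):
    """Median <.> and median absolute deviation sigma[.] over the N (= 96) bootstrap estimates."""
    med = np.median(values, axis=0)
    mad = np.median(np.abs(values - med), axis=0)
    return med, mad
```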
The three maps in the first column of Fig. 3 provide a rough spatial overview of the principal tissue types: 〈E[R2]〉 and 〈E[Diso]〉 clearly identify CSF-rich areas (low 〈E[R2]〉 and high 〈E[Diso]〉), while high 〈E[DΔ²]〉 values separate WM from the two other main cerebral tissues. However, mean parameter maps alone cannot identify or characterize intra-voxel heterogeneity, and their use should be complemented with dispersion measures including, but not limited to, the (co)variance elements displayed in columns 2 and 3 of Fig. 3. For example, voxels surrounding the ventricles do not show a truly distinctive feature in maps of mean values but are characterized by nonzero covariance matrix elements. To understand the origin of the nonzero values, let us focus on the WM+CSF and GM+CSF voxels indicated in Fig. 3. The corresponding P(R2,D) distributions (displayed in Fig. 2c) comprise two populations at distant (R2,Diso) coordinates, and both voxels are thus characterized by high values of Var[R2] and Var[Diso] (see histograms of Fig. 3). As CSF and GM are both characterized by a low anisotropy, GM+CSF exhibits low values of Var[DΔ²]; in contrast, WM+CSF displays a significant dispersion along DΔ², which results in high Var[DΔ²] values. Covariance measures provide information about the correlations across the various dimensions of the R2D space. In WM+CSF distributions, for instance, higher values of diffusion anisotropy are correlated with higher R2 and lower Diso, which results in positive Cov[R2,DΔ²] and negative Cov[Diso,DΔ²]. The elevated Var[R2] and Var[Diso], and negative Cov[R2,Diso] values found in the ventricular regions are thus interpreted as a product of subvoxel combinations of CSF with other components. A combination of high Var[DΔ²], positive Cov[R2,DΔ²], and negative Cov[Diso,DΔ²] locates WM+CSF voxels in those same regions, while low values of Var[DΔ²] indicate the existence of deep gray matter in the vicinity of the ventricles.
The maps displayed in Fig. 3 can also be used to identify voxels containing WM+GM mixtures. Because WM and GM distributions are characterized by similar values of R2 and Diso, WM+GM voxels result in nearly zero values of Var[R2], Var[Diso], Cov[Diso,y], and Cov[R2,y]. Instead, WM+GM voxels are signaled by finite values of Var[DΔ²], originating from the log(D||/D⊥) spread observed in the underlying R2D distribution (see the WM+GM distribution in Fig. 3c).
Table 1. R2-D limits of the “big”, “thin”, and “thick” bins.
Figure 4. Parameter maps with bin-resolved means of the relaxation–diffusion distributions. (a) Division of the R2-D distribution space into different bins. The distribution space was separated into three bins (gray volumes) named “big”, “thin”, and “thick” that loosely capture the diffusion features of cerebrospinal fluid (CSF), white matter (WM), and gray matter (GM), respectively. The 3D scatter plots display the nonparametric R2-D distributions corresponding to the CSF (top), WM (middle), and GM (bottom) voxels selected in Fig. 2. Superquadric tensor glyphs are used to illustrate the representative D captured by each bin. (b) Parameter maps of average per-bin means (color) of transverse relaxation rate 〈E[R2]〉, isotropic diffusivity 〈E[Diso]〉, squared anisotropy 〈E[DΔ²]〉, and diffusion tensor orientation 〈E[Orientation]〉. The orientation maps (column 4) are color-coded as [R,G,B] = [Dxx, Dyy, Dzz]/max(Dxx, Dyy, Dzz), where Dii are the diagonal elements of laboratory-framed average diffusion tensors estimated from the various distribution bins. Brightness indicates the signal fractions corresponding to the big (row 1), thin (row 2), and thick (row 3) bins. The white arrows identify deep gray-matter structures.
## 3.3 Bin-resolved metrics of tissue heterogeneity
A more detailed picture of intra-voxel heterogeneity is obtained by dividing the distribution space into smaller subspaces (“bins”). In line with early diffusion MRI works (Pierpaoli et al., 1996), we define three bins that loosely capture the diffusion properties of the P(R2,D) distributions from the main brain components (see Table 1 and Fig. 4a). The big bin contains CSF contributions, whereas the “thin” and “thick” bins capture the signal fractions from WM and GM, respectively. The names big, thin, and thick are inspired by the geometric properties of the microscopic diffusion tensors that are captured by each individual bin. Visual inspection of Fig. 4b reveals that the spatial distributions of the three bins are consistent with the expected distributions of the corresponding tissues, providing more evidence that the coarsely defined bins allow a separation of the main cerebral constituents. Parameter maps of the per-bin means of the relaxation and diffusion properties are more straightforwardly interpreted than the heterogeneity measures derived from the entire distribution space: for example, the deep gray matter inferred in the previous paragraph is easily identifiable at the center (white arrows) of the thick maps of Fig. 4b. Further, the correlations across the various dimensions of the diffusion space allow the resolution of subtle differences in relaxation rates. Focusing on the first column of Fig. 4b, we notice that the thick fraction exhibits a slightly lower R2 rate than that of the thin fraction. This behavior is in accordance with the previous literature (Tofts, 2003) and is consistently observed across the entire slice.
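Operationally, the binning amounts to assigning each discrete component to a subspace by its diffusion properties and then summing the component weights per bin. The sketch below illustrates this idea only; the numerical thresholds are placeholder values chosen for the example, not the actual limits of Table 1 and Fig. 4a.

```python
import numpy as np

# Illustrative bin limits only -- the real boundaries are those of Table 1 / Fig. 4a.
BIG_DISO_MIN   = 2.0e-9   # m^2/s: "big" bin, CSF-like fast isotropic diffusion (assumed value)
THIN_RATIO_MIN = 4.0      # D_par/D_perp: "thin" bin, strongly anisotropic WM-like components (assumed)

def assign_bins(D_iso, Dpar_over_Dperp):
    big = D_iso > BIG_DISO_MIN
    thin = (~big) & (Dpar_over_Dperp > THIN_RATIO_MIN)
    thick = ~(big | thin)
    return big, thin, thick

def bin_fractions_and_means(weights, R2, D_iso, Dpar_over_Dperp):
    """Per-bin signal fractions and bin-resolved mean R2 for one P(R2, D) solution."""
    out = {}
    for name, mask in zip(("big", "thin", "thick"),
                          assign_bins(D_iso, Dpar_over_Dperp)):
        w = weights[mask]
        out[name] = {"fraction": w.sum() / weights.sum(),
                     "E[R2]": np.average(R2[mask], weights=w) if w.sum() > 0 else np.nan}
    return out
```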
Figure 5. Uncertainty estimation of the statistical measures derived from the relaxation–diffusion distributions. 3D density (color) scatter plots show the relationship between average initial signal intensity S0, the average of mean values derived from the R2-D distributions 〈E[x]〉, and their corresponding uncertainties σ[E[x]]. For display purposes, signal intensity values were normalized to the maximum recorded S0, max(〈S0〉). The contour lines on the side planes show 2D projections of the point density function defining the distribution of data points. The average mean values of transverse relaxation rate 〈E[R2]〉 (row 1), isotropic diffusivity 〈E[Diso]〉 (row 2), and squared anisotropy 〈E[DΔ²]〉 (row 3) were computed from all voxels whose S0 was greater than 5 % of max(〈S0〉). The resulting dataset comprises 55 327 voxels spread throughout all slices of the acquired 3D volume. The uncertainties of 〈E[R2]〉, 〈E[Diso]〉, and 〈E[DΔ²]〉 correspond to the median absolute deviation between measures extracted from 96 independent solutions of Eq. (2): σ[E[R2]], σ[E[Diso]], and σ[E[DΔ²]], respectively. All displayed data were derived from both the entire R2-D space (column 1) and the “big” (column 2), “thin” (column 3), and “thick” (column 4) bins defined in Fig. 4a.
Global and bin-resolved averages for all the analyzed voxels of the entire 3D image matrix are compiled in Fig. 5, where per-voxel average means of R2, Diso, and DΔ² are plotted against their respective uncertainties, σ[E[R2]], σ[E[Diso]], and σ[E[DΔ²]], and average signal amplitudes S0. Although the displayed statistical analysis is restricted to mean values, similar calculations can be done using any other scalar measure derived from the 5D R2D distributions. Examination of the scatter plots in Fig. 5 shows that microscopic populations with low signal fractions generate statistical measures with significantly higher uncertainties. While no immediate correlation is discerned between the estimated mean values and their corresponding uncertainty, the negative correlation between uncertainty and signal fractions introduces a significant dispersion of 〈E[x]〉 at 〈S0〉/max(〈S0〉) < 0.1 (see, for example, the Diso scatterplots for the thin and thick populations). Despite the lower precision at low S0, the various average mean values are observed to be nearly constant throughout the 〈S0〉/max(〈S0〉) > 0.1 region; the only exception is 〈E[DΔ²]〉 for the thin fraction, which shows a higher susceptibility to noise as evidenced by its positive correlation with S0.
Figure 6. Per-bin relaxation properties and tissue composition. (a) Transverse relaxation properties specific to each of the “thin” (red) and “thick” (green) bins defined in Fig. 4a. The color-coded composite images (top) and histograms (bottom) display the fractional populations and average mean transverse relaxation values 〈E[R2]〉 of the two bins. The first column displays all of the thin and thick voxels, while the two other columns focus on thin+thick mixtures wherein the bin-specific 〈E[R2]〉 values exhibit either significant (second column) or nonsignificant (third column) differences. (b) Bin-resolved signal fractions (brightness) and average per-bin means (color) of R2 and squared anisotropy DΔ². Regions 1 and 2 identify microstructural properties singled out in the Results section. (c) Subdivision of the thick bin into three different R2 subspaces. The contributions from different sub-bins are compared with a high-resolution R1-weighted image segmented into four different tissues: white matter (WM), cortical gray matter (GM), deep GM, and cerebrospinal fluid (CSF). Additive color maps display the spatial distribution of sub-bin fractions (from low to high R2: green, red, blue), and of cortical (green) and deep (red) GM. (d) Color-coded composite images showing the contributions of different bins (red = thin, green = thick, blue = big) and conventional R1-based segmentation labels (red = WM, green = cortical + deep GM, blue = CSF).
The minor differences between the relaxation rates of the thin and thick components are also observed in the scatter plots of Fig. 5. A more detailed analysis shows that distinct R2 rates can be consistently detected in voxels containing GM+WM mixtures (see Fig. 6a), where conventional 1D R2 distributions fail to resolve the subtle differences between components (Whittall et al., 1997). The second and third columns of Fig. 6a display mixed voxels, where the thin and thick populations each account for at least 30 % of the total measured signal. Approximately 75 % of the mixed voxels exhibit R2 differences greater than the estimated uncertainties, thus providing evidence that the differentiation between the R2 rates of the two bins is indeed a meaningful result.
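A simple way to express this per-voxel check in code is to flag mixed voxels whose bin-resolved R2 difference exceeds the combined uncertainties. The sketch below is illustrative only: the 30 % fraction threshold follows the text, but the specific criterion of comparing the difference against the summed median absolute deviations is an assumption made for this example.

```python
import numpy as np

def r2_separation_is_meaningful(E_R2_thin, s_R2_thin, E_R2_thick, s_R2_thick,
                                f_thin, f_thick, min_fraction=0.3):
    """Flag mixed voxels whose bin-resolved R2 rates differ by more than their uncertainties.

    E_R2_* are ensemble medians <E[R2]> and s_R2_* the corresponding median absolute
    deviations sigma[E[R2]] for the thin and thick bins; f_* are per-bin signal fractions.
    """
    mixed = (f_thin >= min_fraction) & (f_thick >= min_fraction)
    separated = np.abs(E_R2_thin - E_R2_thick) > (s_R2_thin + s_R2_thick)
    # Return the mixed-voxel mask and the fraction of mixed voxels with a meaningful separation.
    return mixed, np.mean(separated[mixed])
```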
All bin-resolved 〈E[R2]〉 plots in Fig. 5 display a secondary cluster at high R2 values. Inspection of Fig. 6b reveals that the fast relaxing cluster corresponds to the nonmasked extra-meningeal tissues and, for the thin fraction, to the pallidum (region 1 in Fig. 6b), a major component of the basal ganglia structures located deep in the brain. The contributions from the high-R2 components are observed to be concentrated around R2=30 s−1 (see Fig. 6a), the upper R2-limit of the Monte Carlo inversion procedure. The “pile-up” of fast-relaxing contributions around the maximum allowed R2 value is a well-known artifact of Laplace inversions (Saab et al., 1999).
The 〈E[R2]〉 map of the thick bin features three main R2 populations: high R2 in the skull region (red voxels), low R2 in peripheral brain regions (green voxels), and intermediate R2 values in the inner brain regions (yellow voxels). To more easily inspect the spatial distribution of the various populations within the thick bin, we divided the (−3.5 < log(D||/D⊥) < 0.6, −10 < log(Diso/m2 s−1) < −8.7) subspace into three separate R2 regions, and defined the “low” (−0.5 < log(R2/s−1) < 1.2), “medium” (1.2 < log(R2/s−1) < 1.4), and “high” (1.4 < log(R2/s−1) < 2) sub-bins of Fig. 6c. In T2 units, the low, medium, and high bins correspond to 63 ms to 3.16 s, 40 to 63 ms, and 10 to 40 ms, respectively. Note that the true upper boundary of the high bin is set by the limits of the Monte Carlo inversion and is equal to R2=30 s−1; the R2=100 s−1 boundary is defined simply to render a more aesthetically pleasing plot (see Fig. 6c). The bin-resolved signal fraction maps were then compared with a high-resolution longitudinal relaxation-weighted (R1-weighted) image segmented into four tissue classes: WM, cortical GM, deep GM, and CSF. Figure 6c shows that the spatial distributions of the low and medium subfractions roughly correspond to the expected distributions of cortical GM and deep GM structures, respectively. Despite the similarities between bin-resolved and segmentation maps, the former possesses a grainier appearance and seems to miss a significant portion of deep GM tissue at the center of the slice. While the grainier aspect is caused by the higher noise of the R2D correlation dataset, the absence of central GM is explained by the presence of anisotropic tissues in structures such as the pallidum (region 1 in Fig. 6b) and the thalamus (region 2 in Fig. 6b). Those two deep GM structures are then contained within the thin bin, and not within the thick bin from which we defined the R2 subspaces. Joining the contributions of cortical and deep GM within a single tissue class offers further insight into the link between microscopic tissue composition and binning (see Fig. 6d). Comparing the three-tissue segmentation with maps of the big, thin, and thick fractions confirms that the pallidum and part of the thalamus are captured by the thin bin.
Figure 7. 15 min protocol – bin-resolved signal contributions and mean parameter maps. (a) Map of average initial signal intensity S0 (top); subdivision of the diffusion space into the “big”, “thin”, and “thick” bins (middle); color-coded composite map of per-bin signal contributions (bottom). The colors in the bottom panel identify the fractions from the different bins: [R,G,B] = [thin, thick, big]. (b) Parameter maps of average per-bin means (color) of transverse relaxation rate 〈E[R2]〉, isotropic diffusivity 〈E[Diso]〉, squared anisotropy 〈E[DΔ²]〉, and diffusion tensor orientation 〈E[Orientation]〉. The color and brightness of the various maps follow the same convention as in Fig. 4b.
## 3.4 Clinical feasibility of the R2–D correlation approach
The acquisition protocol discussed thus far can be inserted without further alteration into research studies of brain disease, where subjects are recruited for long scan sessions. However, the associated 45 min scan time impedes its use outside of a clinical-research setting. To assess the potential for clinical translation of the proposed framework, we compare the performance of the exhaustive 45 min protocol with that of an abbreviated protocol, compatible with the time frame of most clinical applications. To this end, we included two different 5D relaxation–diffusion MRI protocols in a single imaging session: the 45 min protocol described in the Methods section, and an abbreviated 15 min protocol whose details are contained in the Supplement. The two acquisition protocols were used consecutively without repositioning the volunteer.
The abbreviated dataset was inverted with the Monte Carlo algorithm described above. The resulting 5D R2D distributions and parameter maps are compiled in the Supplement. Figure 7 shows the bin-resolved parameter maps obtained with the 15 min acquisition protocol. Overall, the parameter maps derived from the abbreviated data resemble slightly noisier reproductions of the maps computed from the exhaustive protocol and support the same conclusions. Namely, the big, thin, and thick bins demarcate the signal contributions from CSF, WM, and GM, respectively, and the main R2D properties of those same tissue types are accurately captured by the per-bin mean parameter maps. The most obvious difference between the two datasets is the lower quality of the R2 metrics derived from the abbreviated data. This is evidenced by unreasonably high R2 rates in the ventricles (see the 〈E[R2]〉 maps in Fig. 7b), and by a greater difficulty in separating the mean R2 rates of the thin and thick bins. Only 65 % of mixed voxels from the abbreviated dataset show a meaningful R2 separation, as opposed to the 75 % determined in the previous subsection. The lower resolution along the R2 dimension is most likely explained by the fact that the abbreviated protocol concentrates 85 % of its measurements at only two distinct values of τE, an acquisition scheme that carries little sensitivity to dispersion along R2. In future experiments, we plan to address this issue by enforcing a more uniform distribution of data points along the various echo times.
4 Discussion and conclusions
The proposed framework resolves intra-voxel heterogeneity on a 5D space of transverse relaxation rates R2 and diffusion tensor parameters (Diso, DΔ, θ, φ). Per-voxel brain composition is broken down into a non-predefined number of microscopic environments with clearly distinct relaxation and diffusion properties. The heterogeneity within a voxel is thus resolved as linear combinations of independent microscopic components that can be assigned to local tissue environments; on a global scale, the subvoxel environments can be grouped into more general tissue classes. For healthy brain tissue, the detected microenvironments were classified into three broad bins whose diffusion properties respectively match those of the main constituents of the brain: WM, GM, and CSF. The separation between contributions from the three bins was observed to provide a clean 3D mapping of WM, GM, and CSF that agrees well with a conventional R1-based tissue segmentation. This demonstrates that the proposed protocol can indeed separate subvoxel tissue environments with different relaxation and diffusion properties; in the healthy human brain, the resolved environments can be coarsely assigned to contributions from CSF, WM, and GM (see Fig. 6d). The distinction between microscopic tissue environments with different R2D properties provides complementary information to R1-weighted segmentation and enables the resolution of tissue heterogeneity within a single anatomical structure, e.g. resolving anisotropic and isotropic regions within the thalamus.
The protocol presented in this work shows promise for neuroanatomy studies dealing with the resolution of specific microscopic features such as nerve fiber-tracking through heterogeneous voxels (Jeurissen et al., 2014) or free water mapping (Pasternak et al., 2009). Within a clinical setting, disentangling different tissue signals is expected to be useful for pathological conditions associated with intra-voxel tissue heterogeneity, e.g. tumor infiltration in surrounding brain tissue, inflammation of cerebral tissue, or replacement of myelin with free water. In the latter example, the proposed echo times lead to an almost complete decay of the signal contributions from myelin domains, meaning that the effects of axonal demyelination would have to be probed indirectly by tracking a reduction of the signal fraction from anisotropic subvoxel components.
Several approaches have been introduced in the diffusion MRI literature where subvoxel composition is investigated by devising signal models with increasingly complex priors and constraints (Wang et al., 2011; Zhang et al., 2012; Scherrer et al., 2016). While such models can be used to investigate the conditions mentioned in the above paragraph, the attained conclusions will be heavily dependent on the assumptions used to construct the model (Novikov et al., 2018). Hence, erroneous conclusions may be derived whenever the presupposed MR properties differ from the underlying microstructure (Lampinen et al., 2019). This limitation is alleviated in the present framework, where subvoxel heterogeneity is quantified with nonparametric distributions that are retrieved from the data with minimal assumptions on the underlying tissue properties. Moreover, the vast majority of diffusion MRI models have so far been implemented with conventional Stejskal–Tanner sequences, which are known to entangle the signal contributions from DΔ and D orientation. Acquiring data at various bΔ has been shown to disentangle the effects of anisotropy and dispersion in D orientations (Eriksson et al., 2013, 2015), meaning that our 5D (τE, b) acquisition space is expected to provide a clearer component resolution whenever orientation dispersion is present.
Besides resolving the various microscopic domains within a voxel, we were also capable of observing subtle differences in component-specific relaxation rates. As mentioned before, this information is unattainable with classical multi-echo R2 distribution protocols (Whittall et al., 1997), and its extraction is facilitated by the vast correlations across the full (Diso, DΔ, θ, φ) space (de Almeida Martins and Topgaard, 2018). We would like to reinforce that small R2 differences can be observed despite the limited number and range of echo times sampled in this work; here, the separation between R2 components is mostly driven by the excellent resolution in the diffusion dimensions. The measurement of D-resolved transverse relaxation rates may complement previous work on tract-specific R1 rates (De Santis et al., 2016).
At the cellular level, the translational motion of water inside the human brain is influenced by interactions with macromolecules and partially permeable membranes forming compartments with barrier spacings ranging from nanometers for synaptic vesicles and myelin sheaths to micrometers for the plasma membranes of the axons. The diffusion of water during the 0.1 s timescale of MRI signal encoding is thus affected by a myriad of complex phenomena that are not explicitly accounted for in Eq. (2). Instead, we use the well-established approach of approximating the micrometer-scale water displacements as a distribution of anisotropic Gaussian contributions (Jian et al., 2007). The measured diffusivities may depend on the exact choice of experimental variables if the timing parameters of the gradient waveforms match the characteristic timescales of displacements between cellular barriers (Woessner, 1963) or molecular exchange between tissue environments with distinctly different diffusion properties (Kärger, 1969). By augmenting our acquisition protocol with an experimental dimension in which the spectral profiles of the gradient waveforms are comprehensively varied (Callaghan and Stepišnik, 1996; Lundell et al., 2019), microscopic barrier spacings could in principle be estimated by explicitly including the effects of restricted diffusion in the kernel of Eq. (2). Here we chose to minimize the influence of time dependence by designing waveforms with similar gradient-modulation spectra.
In the previous section, we mentioned that prolate (DΔ > 0) and oblate (DΔ < 0) diffusion tensors with $|{D}_{\mathrm{\Delta }}|$ < 0.5 result in similar signal decays (Eriksson et al., 2015). In the absence of orientational order, diffusion tensor anisotropy is detected as a deviation from a mono-exponential signal decay, which, to first order, is proportional to ${{D}_{\mathrm{\Delta }}}^{\mathrm{2}}$ (Eriksson et al., 2015). Consequently, the magnitude of DΔ can be easily determined at moderate b values while the sign may require data acquired with b values up to 4×109 sm−2 (Eriksson et al., 2015) and echo times comparable to the ones registered in this work; currently, such acquisition parameters can only be achieved with a specialized scanner (Setsompop et al., 2013; Jones et al., 2018).
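The DΔ² dependence can be made explicit with a brief sketch, under the assumptions of identical axisymmetric tensors, an isotropic orientation distribution, and low b. With $u=\cos\beta$ uniform on $[-1,1]$, $\langle P_2(u)\rangle=0$ and $\langle P_2(u)^2\rangle=1/5$, so a cumulant expansion of the powder average of Eq. (3) gives

$$\ln\frac{\langle S\rangle}{S_0\,e^{-\tau_\mathrm{E}R_2}}\approx-bD_\mathrm{iso}+\tfrac{1}{2}\,\mathrm{Var}\!\left[2\,b\,b_\Delta D_\Delta D_\mathrm{iso}P_2(u)\right]=-bD_\mathrm{iso}+\tfrac{2}{5}\,b^2b_\Delta^2D_\Delta^2D_\mathrm{iso}^2,$$

so the leading deviation from mono-exponential decay scales with DΔ² and is blind to the sign of DΔ at this order.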
Resolving and separately characterizing intra- and extra-axonal compartments in brain tissue has been of long-standing interest in the MRI field (Does, 2018). Recently, Veraart et al. (2017) estimated subtle differences in R2 and diffusivity parameters for the intra- and extra-axonal components of human brain white matter by applying a constrained two-component model to data acquired with a conventional relaxation–diffusion correlation protocol relying on the Stejskal–Tanner experiment. The obtained R2 values differ by less than a factor of 2 while the Diso values are nearly identical and the DΔ values are 1 (by constraint) and approximately 0.5 for the intra- and extra-cellular compartments, respectively. Comparing with the nonparametric distributions in Fig. 2, we note that components with such similar properties would be virtually impossible to resolve in our minimally constrained approach despite the additional information added by the b-tensor shape dimension. The limited resolution is consistent with the fact that Eq. (2) states an ill-posed inverse problem accommodating multiple nonunique solutions – probably also including the one with two thin components as assumed by Veraart et al. We suggest that the unconstrained inversion could be used as a first analysis tool to define the boundaries of a more ambitious model incorporating additional information, e.g. from microanatomy studies that is not directly observable in the MRI data.
This work introduces and demonstrates a novel MRI framework in which the microscopic heterogeneity of the living human brain is characterized via 5D correlations between the transverse relaxation rate R2, isotropic diffusivities Diso, normalized diffusion anisotropy DΔ, and diffusion tensor orientation (θ, φ). The correlations allow model-free estimation of per-voxel relaxation–diffusion distributions P(R2,D) that combine the chemical sensitivity of R2 with the link between microstructure and the diffusion metrics. The rich information content of P(R2,D) is reported through a set of 21 unique maps obtained by binning and parameter calculation in the 5D distribution space. Being specific to different tissue types while relying on few assumptions, the presented protocol shows promise for explorative neuroscience and clinical studies in which microscopic tissue composition cannot be presumed a priori. While the spatial resolution of the data acquired in this work was relatively limited, sacrificing resolution for SNR, there are several avenues to explore in the future, in hardware, acquisition, and analysis, that will boost the SNR per unit time and thereby increase the potential for improved resolution. From the hardware perspective, the use of ultra-high fields (7 T and above) and ultra-strong field gradients (Setsompop et al., 2013; Jones et al., 2018) can boost SNR and reduce the echo time per unit b value, respectively. For example, as noted in Jones et al. (2018), for bΔ=0 encoding, the shorter τE afforded by stronger gradients such as those available on a Connectom scanner (300 mT m⁻¹) results in an improvement in SNR of approximately 50 % compared to that achievable on the system used in this study (80 mT m⁻¹ gradients). From the acquisition perspective, multi-band acquisition schemes (Barth et al., 2016) can speed up overall acquisition times and facilitate wide brain coverage with smaller voxel sizes. Moreover, replacing the rectilinear echo-planar readout (Turner et al., 1991) with a spiral readout (Wilm et al., 2017) can help to further reduce the echo time, boosting SNR, which could be traded for higher spatial resolution. From the analysis side, as noted in the Methods section, no denoising approaches were applied here. Recent advances in denoising and/or joint reconstruction (Veraart et al., 2016; Bazin et al., 2019; Wang et al., 2019; Haldar et al., 2020) could further enhance the SNR, allowing the resolution to be pushed higher. Finally, the presented framework can be merged with MRI fingerprinting methodology (Ma et al., 2013), whose pattern-matching algorithms may considerably boost the data inversion speed.
Code and data availability
The software analysis tools discussed in this paper are available for download from a public GitHub repository: https://github.com/JoaoPdAMartins/md-dmri (last access: 25 February 2020) (Nilsson et al., 2018b). The presented in vivo data may be directly requested from the authors.
Supplement
Author contributions
DKJ, C-FW, and DT conceived the project. JPdAM, CMWT, and FS designed the acquisition protocol. FS and C-FW acquired the data. The nonparametric Monte Carlo algorithm was designed by JPdAM and DT, and the data analysis was performed by JPdAM, CMWT, and DT. JPdAM and DT wrote the manuscript, and all authors read and reviewed the manuscript.
Competing interests
Daniel Topgaard owns shares in and João P. de Almeida Martins is partially employed by the private company Random Walk Imaging AB (Lund, Sweden), which holds patents related to the described method. All other authors declare no competing interests.
Acknowledgements
The authors thank Scott Hoge for his assistance with the MRI measurements. João P. de Almeida Martins and Daniel Topgaard were financially supported by the Swedish Foundation for Strategic Research (AM13-0090, ITM17-0267) and the Swedish Research Council (2014-3910, 2018-03697). Chantal M. W. Tax is supported by a Rubicon grant (680-50-1527) from the Netherlands Organisation for Scientific Research (NWO). Filip Szczepankiewicz and Carl-Fredrik Westin are both supported by a National Institutes of Health grant (P41EB015902). Derek K. Jones is supported by a Wellcome Investigator Award (096646/Z/11/Z) and a Wellcome Strategic Award (104943/Z/14/Z).
Financial support
This research has been supported by the Stiftelsen för Strategisk Forskning (grant nos. AM13-0090 and ITM17-0267), the Vetenskapsrådet (grant nos. 2014-3910 and 2018-03697), the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (grant no. 680-50-1527), the National Institutes of Health (grant no. P41EB015902), and the Wellcome Trust (grant nos. 096646/Z/11/Z and 104943/Z/14/Z).
Review statement
This paper was edited by Markus Barth and reviewed by two anonymous referees.
References
Assaf, Y.: Imaging laminar structures in the gray matter with diffusion MRI, Neuroimage, 197, 677–688, https://doi.org/10.1016/j.neuroimage.2017.12.096, 2018.
Bak, M. and Nielsen, N. C.: REPULSION, a novel approach to efficient powder averaging in solid-state NMR, J. Magn. Reson., 125, 132–139, https://doi.org/10.1006/jmre.1996.1087, 1997.
Barth, M., Breuer, F., Koopmans, P. J., Norris, D. G., and Poser, B. A.: Simultaneous multislice (SMS) imaging techniques, Magn. Reson. Med., 75, 63–81, https://doi.org/10.1002/mrm.25897, 2016.
Basser, P. J. and Pierpaoli, C.: Microstructural and physiological features of tissues elucidated by quantitative-diffusion-tensor MRI, J. Magn. Reson. Ser. B, 111, 209–219, https://doi.org/10.1016/j.jmr.2011.09.022, 1996.
Bazin, P.-L., Alkemade, A., van der Zwaag, W., Caan, M., Mulder, M., and Forstmann, B. U.: Denoising High-Field Multi-Dimensional MRI With Local Complex PCA, Front. Neurosci.-Switz., 13, 1066, https://doi.org/10.3389/fnins.2019.01066, 2019.
Benjamini, D. and Basser, P. J.: Magnetic resonance microdynamic imaging reveals distinct tissue microenvironments, Neuroimage, 163, 183–196, https://doi.org/10.1016/j.neuroimage.2017.09.033, 2017.
Callaghan, P. T. and Stepišnik, J.: Generalized analysis of motion using magnetic field gradients, in: Advances in magnetic and optical resonance, Elsevier, 325–388, 1996.
Daoust, A., Dodd, S., Nair, G., Bouraoud, N., Jacobson, S., Walbridge, S., Reich, D. S., and Koretsky, A.: Transverse relaxation of cerebrospinal fluid depends on glucose concentration, Magn. Reson. Imaging, 44, 72–81, https://doi.org/10.1016/j.mri.2017.08.001, 2017.
de Almeida Martins, J. P. and Topgaard, D.: Two-Dimensional Correlation of Isotropic and Directional Diffusion Using NMR, Phys. Rev. Lett., 116, 087601, https://doi.org/10.1103/PhysRevLett.116.087601, 2016.
de Almeida Martins, J. P. and Topgaard, D.: Multidimensional correlation of nuclear relaxation rates and diffusion tensors for model-free investigations of heterogeneous anisotropic porous materials, Sci. Rep., 8, 2488, https://doi.org/10.1038/s41598-018-19826-9, 2018.
De Santis, S., Barazany, D., Jones, D. K., and Assaf, Y.: Resolving relaxometry and diffusion properties within the same voxel in the presence of crossing fibres by combining inversion recovery and diffusion-weighted acquisitions, Magn. Reson. Med., 75, 372–380, https://doi.org/10.1002/mrm.25644, 2016.
Does, M. D.: Inferring brain tissue composition and microstructure via MR relaxometry, Neuroimage, 182, 136–148, https://doi.org/10.1016/j.neuroimage.2017.12.087, 2018.
English, A. E., Whittal, K. P., Joy, M. L. G., and Henkelman, R. M.: Quantitative two-dimensional time correlation relaxometry, Magn. Reson. Med., 22, 425–434, https://doi.org/10.1002/mrm.1910220250, 1991.
Eriksson, S., Lasic, S., and Topgaard, D.: Isotropic diffusion weighting in PGSE NMR by magic-angle spinning of the q-vector, J. Magn. Reson., 226, 13–18, https://doi.org/10.1016/j.jmr.2012.10.015, 2013.
Eriksson, S., Lasic, S., Nilsson, M., Westin, C. F., and Topgaard, D.: NMR diffusion-encoding with axial symmetry and variable anisotropy: Distinguishing between prolate and oblate microscopic diffusion tensors with unknown orientation distribution, J. Chem. Phys., 142, 104201, https://doi.org/10.1063/1.4913502, 2015.
Frydman, L., Chingas, G. C., Lee, Y. K., Grandinetti, P. J., Eastman, M. A., Barrall, G. A., and Pines, A.: Variable-angle correlation spectroscopy in solid-state nuclear magnetic resonance, J. Chem. Phys., 97, 4800–4808, https://doi.org/10.1063/1.463860, 1992.
Galvosas, P. and Callaghan, P. T.: Multi-dimensional inverse Laplace spectroscopy in the NMR of porous media, C. R. Physique, 11, 172–180, https://doi.org/10.1016/j.crhy.2010.06.014, 2010.
Gan, Z.: High-resolution chemical shift and chemical shift anisotropy correlation in solids using slow magic angle spinning, J. Am. Chem. Soc., 114, 8307–8309, https://doi.org/10.1021/ja00047a062, 1992.
Haldar, J. P., Liu, Y., Liao, C., Fan, Q., and Setsompop, K.: Fast submillimeter diffusion MRI using gSlider-SMS and SNR-enhancing joint reconstruction, Magn. Reson. Med., in press, https://doi.org/10.1002/mrm.28172, 2020.
Halle, B.: Molecular theory of field-dependent proton spin-lattice relaxation in tissue, Magn. Reson. Med., 56, 60–72, https://doi.org/10.1002/mrm.20919, 2006.
Istratov, A. A. and Vyvenko, O. F.: Exponential analysis in physical phenomena, Rev. Sci. Instrum., 70, 1233–1257, https://doi.org/10.1063/1.1149581, 1999.
Jeurissen, B., Tournier, J. D., Dhollander, T., Connelly, A., and Sijbers, J.: Multi-tissue constrained spherical deconvolution for improved analysis of multi-shell diffusion MRI data, Neuroimage, 103, 411–426, https://doi.org/10.1016/j.neuroimage.2014.07.061, 2014.
Jian, B., Vemuri, B. C., Özarslan, E., Carney, P. R., and Mareci, T. H.: A novel tensor distribution model for the diffusion-weighted MR signal, Neuroimage, 37, 164–176, https://doi.org/10.1016/j.neuroimage.2007.03.074, 2007.
Jones, D. K.: Diffusion MRI, Oxford University Press, 2010.
Jones, D. K. and Cercignani, M.: Twenty-five pitfalls in the analysis of diffusion MRI data, NMR Biomed., 23, 803–820, https://doi.org/10.1002/nbm.1543, 2010.
Jones, D. K., Horsfield, M. A., and Simmons, A.: Optimal strategies for measuring diffusion in anisotropic systems by magnetic resonance imaging, Magn. Reson. Med., 42, 515–525, https://doi.org/10.1002/(SICI)1522-2594(199909)42:3<515::AID-MRM14>3.0.CO;2-Q, 1999.
Jones, D. K., Alexander, D. C., Bowtell, R., Cercignani, M., Dell'Acqua, F., McHugh, D. J., Miller, K. L., Palombo, M., Parker, G. J. M., Rudrapatna, U. S., and Tax, C. M. W.: Microstructural imaging of the human brain with a “super-scanner”: 10 key advantages of ultra-strong gradients for diffusion MRI, Neuroimage, 182, 8–38, https://doi.org/10.1016/j.neuroimage.2018.05.047, 2018.
Kärger, J.: Zur Bestimmung der Diffusion in einem Zweibereichsystem mit Hilfe von gepulsten Feldgradienten, Ann. Phys., 479, 1–4, https://doi.org/10.1002/andp.19694790102, 1969.
Klein, S., Staring, M., Murphy, K., Viergever, M. A., and Pluim, J. P.: Elastix: a toolbox for intensity-based medical image registration, IEEE Trans. Med. Imaging, 29, 196–205, 2009.
Kubicki, M., McCarley, R., Westin, C.-F., Park, H.-J., Maier, S., Kikinis, R., Jolesz, F. A., and Shenton, M. E.: A review of diffusion tensor imaging studies in schizophrenia, J. Psychiatr. Res., 41, 15–30, https://doi.org/10.1016/j.jpsychires.2005.05.005, 2007.
Lampinen, B., Szczepankiewicz, F., Noven, M., van Westen, D., Hansson, O., Englund, E., Martensson, J., Westin, C. F., and Nilsson, M.: Searching for the neurite density with diffusion MRI: Challenges for biophysical modeling, Hum. Brain Mapp., 40, 2529–2545, https://doi.org/10.1002/hbm.24542, 2019.
Lasič, S., Szczepankiewicz, F., Eriksson, S., Nilsson, M., and Topgaard, D.: Microanisotropy imaging: quantification of microscopic diffusion anisotropy and orientational order parameter by diffusion MRI with magic-angle spinning of the q-vector, Front. Phys., 2, 11, https://doi.org/10.3389/fphy.2014.00011, 2014.
Laule, C., Bjarnason, T. A., Vavasour, I. M., Traboulsee, A. L., Moore, G. W., Li, D. K., and MacKay, A. L.: Characterization of brain tumours with spin–spin relaxation: pilot case study reveals unique T2 distribution profiles of glioblastoma, oligodendroglioma and meningioma, J. Neurol., 264, 2205–2214, https://doi.org/10.1007/s00415-017-8609-6, 2017.
Lawson, C. L. and Hanson, R. J.: Solving least squares problems, Prentice-Hall, Englewood Cliffs, NJ, 1974.
Le Bihan, D.: Molecular diffusion, tissue microdynamics and microstructure, NMR Biomed., 8, 375–386, https://doi.org/10.1002/nbm.1940080711, 1995.
Lerch, J. P., van der Kouwe, A. J., Raznahan, A., Paus, T., Johansen-Berg, H., Miller, K. L., Smith, S. M., Fischl, B., and Sotiropoulos, S. N.: Studying neuroanatomy using MRI, Nat. Neurosci., 20, 314–326, https://doi.org/10.1038/nn.4501, 2017.
Lundell, H., Nilsson, M., Dyrby, T. B., Parker, G. J. M., Cristinacce, P. L. H., Zhou, F. L., Topgaard, D., and Lasič, S.: Multidimensional diffusion MRI with spectrally modulated gradients reveals unprecedented microstructural detail, Sci. Rep., 9, 9026, https://doi.org/10.1038/s41598-019-45235-7, 2019.
Ma, D., Gulani, V., Seiberlich, N., Liu, K., Sunshine, J. L., Duerk, J. L., and Griswold, M. A.: Magnetic resonance fingerprinting, Nature, 495, 187–192, https://doi.org/10.1038/nature11971, 2013.
Mackay, A., Whittall, K., Adler, J., Li, D., Paty, D., and Graeb, D.: In vivo visualization of myelin water in brain by magnetic resonance, Magn. Reson. Med., 31, 673–677, https://doi.org/10.1002/mrm.1910310614, 1994.
Mitchell, J., Chandrasekera, T. C., and Gladden, L. F.: Numerical estimation of relaxation and diffusion distributions in two dimensions, Prog. Nucl. Magn. Reson. Spectrosc., 62, 34–50, https://doi.org/10.1016/j.pnmrs.2011.07.002, 2012.
Mitra, P. P.: Multiple wave-vector extension of the NMR pulsed-field-gradient spin-echo diffusion measurement, Phys. Rev. B, 51, 15074–15078, https://doi.org/10.1103/PhysRevB.51.15074, 1995.
Nilsson, M., Szczepankiewicz, F., van Westen, D., and Hansson, O.: Extrapolation-Based References Improve Motion and Eddy-Current Correction of High B-Value DWI Data: Application in Parkinson's Disease Dementia, PLoS One, 10, e0141825, https://doi.org/10.1371/journal.pone.0141825, 2015.
Nilsson, M., Englund, E., Szczepankiewicz, F., van Westen, D., and Sundgren, P. C.: Imaging brain tumour microstructure, Neuroimage, 182, 232–250, https://doi.org/10.1016/j.neuroimage.2018.04.075, 2018a.
Nilsson, M., Szczepankiewicz, F., Lampinen, B., Ahlgren, A., De Almeida Martins, J. P., Lasic, S., Westin, C.-F., and Topgaard, D.: An open-source framework for analysis of multidimensional diffusion MRI data implemented in MATLAB, in: Proceedings of the 26th Annual Meeting of ISMRM, Paris, France, 16–21 June 2018, 5355, 2018b.
Novikov, D. S., Kiselev, V. G., and Jespersen, S. N.: On modeling, Magn. Reson. Med., 79, 3172–3193, https://doi.org/10.1002/mrm.27101, 2018.
Padhani, A. R., Liu, G., Mu-Koh, D., Chenevert, T. L., Thoeny, H. C., Takahara, T., Dzik-Jurasz, A., Ross, B. D., Van Cauteren, M., Collins, D., Hammoud, D. A., Rustin, G. J. S., Taouli, B., and Choyke, P. L.: Diffusion-Weighted Magnetic Resonance Imaging as a Cancer Biomarker: Consensus and Recommendations, Neoplasia, 11, 102–125, https://doi.org/10.1593/neo.81328, 2009.
Pasternak, O., Sochen, N., Gur, Y., Intrator, N., and Assaf, Y.: Free Water Elimination and Mapping from Diffusion MRI, Magn. Reson. Med., 62, 717–730, https://doi.org/10.1002/mrm.22055, 2009.
Pierpaoli, C. and Basser, P. J.: Toward a quantitative assessment of diffusion anisotropy, Magn. Res. Med., 36, 893–906, https://doi.org/10.1002/mrm.1910360612, 1996.
Pierpaoli, C., Jezzard, P., Basser, P. J., Barnett, A., and Di Chiro, G.: Diffusion tensor MR imaging of the human brain, Radiology, 201, 637–648, https://doi.org/10.1148/radiology.201.3.8939209, 1996.
Prange, M. and Song, Y. Q.: Quantifying uncertainty in NMR T2 spectra using Monte Carlo inversion, J. Magn. Reson., 196, 54–60, https://doi.org/10.1016/j.jmr.2008.10.008, 2009.
Saab, G., Thompson, R. T., and Marsh, G. D.: Multicomponent T2 relaxation of in vivo skeletal muscle, Magn. Reson. Med., 42, 150–157, https://doi.org/10.1002/(SICI)1522-2594(199907)42:1<150::AID-MRM20>3.0.CO;2-5, 1999.
Scherrer, B., Schwartzman, A., Taquet, M., Sahin, M., Prabhu, S. P., and Warfield, S. K.: Characterizing brain tissue by assessment of the distribution of anisotropic microstructural environments in diffusion-compartment imaging (DIAMOND), Magn. Reson. Med., 76, 963–977, 2016.
Schmidt-Rohr, K. and Spiess, H. W.: Multidimensional solid-state NMR and polymers, Academic Press, 1994.
Setsompop, K., Kimmlingen, R., Eberlein, E., Witzel, T., Cohen-Adad, J., McNab, J. A., Keil, B., Tisdall, M. D., Hoecht, P., Dietz, P., Cauley, S. F., Tountcheva, V., Matschl, V., Lenz, V. H., Heberlein, K., Potthast, A., Thein, H., Van Horn, J., Toga, A., Schmitt, F., Lehne, D., Rosen, B. R., Wedeen, V., and Wald, L. L.: Pushing the limits of in vivo diffusion MRI for the Human Connectome Project, Neuroimage, 80, 220–233, https://doi.org/10.1016/j.neuroimage.2013.05.078, 2013.
Sjölund, J., Szczepankiewicz, F., Nilsson, M., Topgaard, D., Westin, C.-F., and Knutsson, H.: Constrained optimization of gradient waveforms for generalized diffusion encoding, J. Magn. Reson., 261, 157–168, https://doi.org/10.1016/j.jmr.2015.10.012, 2015.
Song, Y. Q.: Magnetic resonance of porous media (MRPM): a perspective, J. Magn. Reson., 229, 12–24, https://doi.org/10.1016/j.jmr.2012.11.010, 2013.
Stejskal, E. O. and Tanner, J. E.: Spin diffusion measurements: Spin echoes in the presence of a time-dependent field gradient, J. Chem. Phys., 42, 288–292, https://doi.org/10.1063/1.1695690, 1965.
Szczepankiewicz, F., Westin, C. F., and Nilsson, M.: Maxwell-compensated design of asymmetric gradient waveforms for tensor-valued diffusion encoding, Magn. Reson. Med., 82, 1424–1437, https://doi.org/10.1002/mrm.27828, 2019.
Tofts, P.: Quantitative MRI of the Brain: Measuring Changes Caused by Disease, John Wiley & Sons, 2003.
Topgaard, D.: Multidimensional diffusion MRI, J. Magn. Reson., 275, 98–113, https://doi.org/10.1016/j.jmr.2016.12.007, 2017.
Topgaard, D.: Diffusion tensor distribution imaging, NMR Biomed., 32, e4066, https://doi.org/10.1002/nbm.4066, 2019.
Topgaard, D. and Söderman, O.: Self-diffusion in two-and three-dimensional powders of anisotropic domains: An NMR study of the diffusion of water in cellulose and starch, J. Phys. Chem. B, 106, 11887–11892, https://doi.org/10.1021/jp020130p, 2002.
Turner, R., Le Bihan, D., and Scott Chesnicks, A.: Echo-planar imaging of diffusion and perfusion, Magn. Reson. Med., 19, 247–253, https://doi.org/10.1002/mrm.1910190210, 1991.
Veraart, J., Novikov, D. S., Christiaens, D., Ades-Aron, B., Sijbers, J., and Fieremans, E.: Denoising of diffusion MRI using random matrix theory, Neuroimage, 142, 394–406, https://doi.org/10.1016/j.neuroimage.2016.08.016, 2016.
Veraart, J., Novikov, D. S., and Fieremans, E.: TE dependent Diffusion Imaging (TEdDI) distinguishes between compartmental T2 relaxation times, Neuroimage, 182, 360–369, https://doi.org/10.1016/j.neuroimage.2017.09.030, 2017.
Wang, H., Zheng, R., Dai, F., Wang, Q., and Wang, C.: High-field mr diffusion-weighted image denoising using a joint denoising convolutional neural network, J. Magn. Reson. Imaging, 50, 1937–1947, https://doi.org/10.1002/jmri.26761, 2019.
Wang, Y., Wang, Q., Haldar, J. P., Yeh, F.-C., Xie, M., Sun, P., Tu, T.-W., Trinkaus, K., Klein, R. S., Cross, A. H., and Song, S.-K.: Quantification of increased cellularity during inflammatory demyelination, Brain, 134, 3590–3601, https://doi.org/10.1093/brain/awr307, 2011.
Whittall, K. P. and MacKay, A. L.: Quantitative interpretation of NMR relaxation data, J. Magn. Reson., 84, 134–152, https://doi.org/10.1016/0022-2364(89)90011-5, 1989.
Whittall, K. P., Mackay, A. L., Graeb, D. A., Nugent, R. A., Li, D. K., and Paty, D. W.: In vivo measurement of T2 distributions and water contents in normal human brain, Magn. Reson. Med., 37, 34–43, https://doi.org/10.1002/mrm.1910370107, 1997.
Wilm, B. J., Barmet, C., Gross, S., Kasper, L., Vannesjo, S. J., Haeberlin, M., Dietrich, B. E., Brunner, D. O., Schmid, T., and Pruessmann, K. P.: Single-shot spiral imaging enabled by an expanded encoding model: Demonstration in diffusion MRI, Magn. Reson. Med., 77, 83–91, https://doi.org/10.1002/mrm.26493, 2017.
Woessner, D. E.: N.M.R. spin-echo self-diffusion measurements on fluids undergoing restricted diffusion, J. Phys. Chem., 67, 1365–1367, https://doi.org/10.1021/j100800a509, 1963.
Zatorre, R. J., Fields, R. D., and Johansen-Berg, H.: Plasticity in gray and white: neuroimaging changes in brain structure during learning, Nat. Neurosci., 15, 528–536, https://doi.org/10.1038/nn.3045, 2012.
Zhang, H., Schneider, T., Wheeler-Kingshott, C. A., and Alexander, D. C.: NODDI: Practical in vivo neurite orientation dispersion and density imaging of the human brain, Neuroimage, 61, 1000–1016, https://doi.org/10.1016/j.neuroimage.2012.03.072, 2012.
Zhang, Y. and Blumich, B.: Spatially resolved D-T2 correlation NMR of porous media, J. Magn. Reson., 242, 41–48, https://doi.org/10.1016/j.jmr.2014.01.017, 2014.
https://diabetesjournals.org/view-large/4222259
Table 4—
Spearman correlations between ASP domain scores and CASS scores for patients with type 1 and type 2 diabetes*
ASP domains / CASS scales
Orthostatic intolerance −0.03 ≈ 0.07 0.08 ≈ 0.00 0.03 ≈ −0.17 0.04 ≈ −0.03
Secretomotor 0.32 ≈ 0.03 0.24 ≈ 0.06 0.40 > 0.04§ 0.37 > 0.05§
Urinary 0.21 ≈ 0.00 0.24 ≈ 0.12 0.32 > −0.05§ 0.37 > 0.05§
Diarrhea 0.11 ≈ −0.03 0.20 ≈ 0.02 −0.07 ≈ −0.18 0.08 ≈ −0.09
Constipation 0.17 ≈ 0.00 0.28 ≈ 0.14 0.11 ≈ 0.12 0.22 ≈ 0.12
Sleep 0.22 ≈ 0.03 0.35 > 0.01§ 0.27 ≈ −0.04 0.41 >> −0.04§
Pupillomotor 0.35 > −0.03§ 0.30 > −0.12§ 0.13 ≈ −0.06 0.35 >> −0.09§
Male sexual failure −0.04 ≈ −0.04 0.06 ≈ 0.07 0.12 ≈ −0.21 0.06 ≈ −0.16
Vasomotor 0.10 ≈ −0.04 0.01 ≈ 0.07 0.11 ≈ 0.08 0.07 ≈ 0.06
Upper gastrointestinal symptoms 0.04 ≈ 0.02 0.28 ≈ 0.11 0.13 ≈ −0.09 0.19 ≈ 0.01
Syncope −0.18 ≈ −0.05 0.03 ≈ −0.23§ −0.17 ≈ −0.12 −0.12 ≈ −0.19
* Group sizes for individual correlations vary due to missing data in some domains.
Correlation for type 1 group listed first in each cell, followed by correlation for type 2 group.
Correlation significantly different from zero at the 0.05 level.
§ Correlation significantly different from zero at the 0.01 level. ≈, correlation between CASS domain and ASP domain not significantly different for type 1 and type 2 groups; >, correlation between CASS domain and ASP domain is significantly different for type 1 and type 2 groups at the 0.05 level; >>, correlation between CASS domain and ASP domain is significantly different for type 1 and type 2 groups at the 0.01 level.
https://heliosgp.wordpress.com/2010/02/23/a-sequence-not-found-in-the-oeis-part-14/
Sequence name: A001235 mod 13.
0, 9, 0, 0, 7, 0, 6, 0, 12, 0,
0, 9, 11, 6, 0, 0, 9, 6, 0, 4,
0, 9, 7, 0, 0, 0, 4, 7, 5
Examples: a(1) = 0, because 1729 = 7*13*19. a(2) = 9, because 4104 = 3^2*5*7*13 + 9. a(19) = 0, because 216125 = 5^3*7*13*19…
PS
Computation was performed by using D and Java. And I welcome more terms.
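A hedged way to reproduce (and extend) the terms above is a brute-force search; the sketch below is the editor's Python, not the original D/Java code, and the search bound is an arbitrary choice that happens to cover the 29 listed terms.

```python
# Editor's brute-force sketch: enumerate A001235 (numbers that are a sum of two
# positive cubes in at least two ways) and print them mod 13. LIMIT is arbitrary.
from collections import Counter

LIMIT = 600_000
counts = Counter()
a = 1
while a**3 < LIMIT:
    b = a
    while a**3 + b**3 < LIMIT:
        counts[a**3 + b**3] += 1     # one count per unordered pair (a, b), a <= b
        b += 1
    a += 1

taxicab = sorted(n for n, c in counts.items() if c >= 2)
print([n % 13 for n in taxicab[:29]])
# Should match the 29 terms listed above, e.g. 1729 % 13 == 0 and 4104 % 13 == 9.
```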
## 5 thoughts on “A sequence not found in the OEIS: part 14”
1. Dear Jun Mizuki
With this line of code in PARI/GP
for(n=1, 20, print(n, " : ", (n-1)!/2^(n+1)))
we get
1 : 1/4
2 : 1/8
3 : 1/8
4 : 3/16
5 : 3/8
6 : 15/16
7 : 45/16
8 : 315/32
9 : 315/8
10 : 2835/16
11 : 14175/16
12 : 155925/32
13 : 467775/16
14 : 6081075/32
15 : 42567525/32
16 : 638512875/64
17 : 638512875/8
18 : 10854718875/16
19 : 97692469875/16
20 : 1856156927625/32
How can this sequence be submitted to OEIS?
If you are interested we can do it together.
(I posted here
http://problemasteoremas.wordpress.com/2010/03/22/solucao-do-desafio-sobre-sequencias-sucessoes-descobrir-o-termo-geral-solution-to-the-challenge-find-the-general-term-of-a-sequence/
this sequence as a challenge of mine, with a few more explanations).
Americo Tavares
Like
2. Dear Jun,
The LaTeX code of the rational sequence is
$x_{n}=\frac{(n-1)!}{2^{n+1}}=\frac{((n-1)!)/\gcd ((n-1)!,2^{n+1})}{2^{n+1}/\gcd ((n-1)!,2^{n+1})}$
and the PARI/GP code can be modified to
for(n=1, 50, print(n, " : ", ((n-1)!/gcd((n-1)!, 2^(n+1)))/(2^(n+1)/gcd((n-1)!, 2^(n+1)))))
which is perhaps better for converting it into other languages (a Python translation is sketched after these comments).
I got as 50th term
8644205195683235286768595007647709520704677734375/32
i.e. the 50th term of your Sequence 1 is
8644205195683235286768595007647709520704677734375
while
32
is the 50th term of Sequence 2.
I hope I have written the correct number of parentheses above.
Américo
Like
3. For a correct display here:
50 :
864420519568323528676859
5007647709520704677734375
/32
Like
4. Dear Américo,
Thanks for your LaTeX code and the 50th term!
Regards,
Like
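For readers coming from other languages, here is the editor's hedged Python translation of the PARI/GP one-liner above, using exact rational arithmetic; the final assertion checks the denominator 32 of the 50th term quoted in the comments.

```python
# Editor's Python translation of the PARI/GP one-liner: x_n = (n-1)!/2^(n+1)
# printed in lowest terms; Fraction handles the gcd reduction automatically.
from fractions import Fraction
from math import factorial

for n in range(1, 51):
    x = Fraction(factorial(n - 1), 2 ** (n + 1))
    print(f"{n} : {x.numerator}/{x.denominator}")

# The 50th term quoted above should be an odd numerator over 32.
assert Fraction(factorial(49), 2 ** 51).denominator == 32
```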
https://www.physicsforums.com/threads/independent-events-probability.571986/
# Homework Help: Independent Events Probability
1. Jan 29, 2012
### lina29
1. The problem statement, all variables and given/known data
During a winter season at one ski resort, there are two roads from Area A to Area B and two roads from Area B to Area C. Each of the four roads is blocked by snow with a probability p=.25 independently of the other roads.
What is the probability that there exists an open route from Area A to Area C
It was decided to add a new path connecting areas A and C directly. It is also blocked by snow with probability p=.25 independently of all the other paths. Now what is the probability that there is an open route from area A to area C.
2. Relevant equations
3. The attempt at a solution
For the first part I assumed that at least one road had to be open from A to B so I got .4375 which is the same for at least one road from B to C. And then that both A to B and B to C had to be open so I got .191 which was wrong.
Any help would be appreciated
2. Jan 29, 2012
### jimbobian
Well, I'm sure there is probably an easier way but there is nothing wrong with drawing a good old probability tree. It will be quite a big one (as there are four roads and so 4 "rounds" to the tree). But once you've drawn the tree, figure out which branches correspond to it being possible to get from A to C, then figure out their probabilities and you should get the answer.
James
3. Jan 29, 2012
### LCKurtz
Remember that the probability of a road between two points being open is 1 minus the probability that both are blocked.
4. Jan 29, 2012
### lina29
Right so what I did to find at least one road being open between A and B or B and C was
1-(1-.25)(1-.25)= .4375
and then to find the probability of both being open I did
.4375*.4375=.191
5. Jan 29, 2012
### jimbobian
But the probability of being blocked is .25, so you've worked out the opposite!
6. Jan 29, 2012
### lina29
Ohh so what I would do is 1-(1-.75)(1-.75)= .5
and then
.5*.5=.25 which would be the final answer for the first part?
7. Jan 29, 2012
### jimbobian
I agree with the logic, not the answer ;)
8. Jan 29, 2012
### lina29
sorry :)
it would be 1-(1-.75)(1-.75)= .9375
and then
.9375*.9375=.8789 right?
For the second part how would I approach it?
9. Jan 29, 2012
### jimbobian
Sounds good.
Well now you have the probability that there is a route to C through B that is open, and you also know the probability of getting directly from A to C. Can you think of a way of combining these to get the second answer?
10. Jan 29, 2012
### lina29
my thought was addition, but then the probabilities would be over 1
11. Jan 29, 2012
### jimbobian
Probabilities over 1 are never really a good sign!
Well imagine you've got two coins, what would be the probability of at least 1 head?
12. Jan 29, 2012
### lina29
1-(1-.5)(1-.5)=.75
13. Jan 29, 2012
### jimbobian
Good, so can you see what the probability of at least one route being open is?
14. Jan 29, 2012
### lina29
1-(1-.8789)(1-.75)=.9697
15. Jan 29, 2012
### jimbobian
Yep, I would agree
16. Jan 29, 2012
### lina29
thank you!
17. Jan 29, 2012
### jimbobian
No problem, hope they're right!
18. Jan 29, 2012
### HallsofIvy
That is correct but you did it the hard way. If P(A) = .25 and P(B) = .25, and A and B are independent, $P(A \text{ and } B) = .25^2 = 0.0625$. The probability of both roads being blocked is 0.0625, so the probability that at least one of the roads is not blocked is $1 - P(A)P(B) = 1 - .25^2 = 1 - 0.0625 = 0.9375$, as you say.
19. Jan 29, 2012
### Ray Vickson
I get something different from all of you!
In the first problem, {AC blocked} = {both AB blocked} or {both BC blocked}, so P{AC blocked} = (1/4)(1/4) + (1/4)(1/4) - (1/4)^4 = 31/256 = .12109375, so P{AC open} = 225/256 = .87890625 . This uses P{A or B} = P{A} + P{B} - P{A & B}.
RGV
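For anyone who wants to double-check the numbers in this thread (note that 225/256 = 0.87890625 agrees with the 0.8789 computed earlier), here is a hedged editor's sketch that reproduces both answers exactly and with a quick Monte Carlo simulation; the road labels and trial count are arbitrary choices.

```python
# Exact answers with Fractions plus a rough Monte Carlo sanity check.
from fractions import Fraction
import random

p_block = Fraction(1, 4)

leg_open = 1 - p_block**2                 # at least one of two parallel roads open
part1 = leg_open**2                       # both legs A-B and B-C must be open
part2 = 1 - (1 - part1) * p_block         # add a direct A-C road
print("Part 1:", part1, "=", float(part1))     # 225/256 = 0.87890625
print("Part 2:", part2, "=", float(part2))     # 993/1024 = 0.9697265625

random.seed(0)
trials = 200_000
hits1 = hits2 = 0
for _ in range(trials):
    roads = [random.random() >= 0.25 for _ in range(5)]   # True means open
    ab, bc, direct = roads[0] or roads[1], roads[2] or roads[3], roads[4]
    hits1 += ab and bc
    hits2 += (ab and bc) or direct
print("Monte Carlo:", hits1 / trials, hits2 / trials)
```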
http://www.ms.u-tokyo.ac.jp/seminar/2018/sem18-216.html
## Tuesday Seminar on Topology
Seminar information: Tuesdays 17:00–18:30, Room 056, Graduate School of Mathematical Sciences Building (Komaba). Organizers: Toshitake Kohno, Nariya Kawazumi, Takahiro Kitayama, Takuya Sakasai. http://park.itc.u-tokyo.ac.jp/MSF/topology/TuesdaySeminar/index.html Tea: 16:30–17:00, Common Room
### 8 November 2018 (Thursday)
10:30–12:00, Room 056, Graduate School of Mathematical Sciences Building (Komaba)
Michael Heusener (Université Clermont Auvergne)
Deformations of diagonal representations of knot groups into $\mathrm{SL}(n,\mathbb{C})$ (ENGLISH)
[ Abstract ]
This is joint work with Leila Ben Abdelghani, Monastir (Tunisia).
Given a manifold $M$, the variety of representations of $\pi_1(M)$ into $\mathrm{SL}(2,\mathbb{C})$ and the variety of characters of such representations both contain information of the topology of $M$. Since the foundational work of W.P. Thurston and Culler & Shalen, the varieties of $\mathrm{SL}(2,\mathbb{C})$-characters have been extensively studied. This is specially interesting for $3$-dimensional manifolds, where the fundamental group and the geometrical properties of the manifold are strongly related.
However, much less is known of the character varieties for other groups, notably for $\mathrm{SL}(n,\mathbb{C})$ with $n\geq 3$. The $\mathrm{SL}(n,\mathbb{C})$-character varieties for free groups have been studied by S. Lawton and P. Will, and the $\mathrm{SL}(3,\mathbb{C})$-character variety of torus knot groups has been determined by V. Munoz and J. Porti.
In this talk I will present some results concerning the deformations of diagonal representations of knot groups. I will recall basic notations and some recent results concerning the representation and character varieties of $3$-manifold groups and in particular knot groups. In particular, we are interested in the local structure of the $\mathrm{SL}(n,\mathbb{C})$-representation variety at the diagonal representation.
https://forum.doom9.org/showthread.php?s=90d64c90abc1837936b2d37b7cddd6f3&p=1717791
BD3D2MK3D v1.17: Convert 3D BDs or MKV to 3D SBS, T&B or Frame-sequential MKV
15th April 2015, 01:23 #341 | Link brochild Registered User Join Date: Apr 2015 Posts: 11 I have Potplayer installed as suggested, it's a much better player, I agree. I can see all the subtitle streams using that player, I can select each one as needed. Last edited by brochild; 15th April 2015 at 03:37.
15th April 2015, 04:00 #342 | Link
Thalyn
Registered User
Join Date: Dec 2011
Posts: 129
Quote:
Originally Posted by r0lZ Please keep us informed, and if the problem occurs again, try to describe exactly what you did. Thanks in advance! (Of course, if you use only FRIMSource now and DGMVCDecode is really the culprit, the problem will probably never occur any more, but who knows?) I'm also interested in results of speed tests, although I don't think that FRIMSource can be really faster than DGMVCSource.
Decided to do a few test runs just to see what happens with BatB.
DGMVCSource using hardware still fails (goes and stays black). However, possibly because I've performed the updates to my available software and drivers, it's now wildly inconsistent as to when exactly it does it.
However, DGMVCSource forced to use software decoding worked perfectly. Due to the aforementioned issue I can't give a speed comparison directly, however it was roughly 85% of the speed of FRIMSource using hardware. Different systems and configurations will obviously be different (my x264 settings are somewhere between Slow and Slower, run by a 4.2GHz 4770K).
Unfortunately, I wasn't able to get a direct comparison of hardware speeds. I ran a shorter encode to get speeds for both but DGMVCSource failed almost at the start, significantly inflating its speed as black frames compress rather quickly. I'll have to go back to something I know DGMVCSource handles fine to do those tests more accurately.
NB This isn't a slight at DGMVCSource or Donald. I have a lot of respect for his work on that plugin.
15th April 2015, 07:19 #343 | Link r0lZ PgcEdit daemon Join Date: Jul 2003 Posts: 7,404 Hum, apparently, it's a problem related to the Intel driver. That means unfortunately that it is not possible to trust the Intel decoder in all cases, but the version currently distributed with BD3D2MK3D should work fine in software mode on all machines. Correct? But if it's really a bug in the drivers, FRIMSource in hardware mode should fail too. Therefore, there is something I don't understand. Anyway, thanks for your tests. @brochild: Thanks for the confirmation. Can I consider your issue as closed? __________________ r0lZ PgcEdit homepage (hosted by VideoHelp) BD3D2MK3D A tool to convert 3D blu-rays to SBS, T&B or FS MKV
15th April 2015, 17:19 #344 | Link
brochild
Registered User
Join Date: Apr 2015
Posts: 11
Quote:
Originally Posted by r0lZ @brochild: Thanks for the confirmation. Can I consider your issue as closed?
Yes, please consider my matter closed.
PS
I'm loving the potplayer.
goodbye PowerDVD
15th April 2015, 17:49 #345 | Link r0lZ PgcEdit daemon Join Date: Jul 2003 Posts: 7,404 Fine. Thanks. BTW, if you adopt PotPlayer, you may want to select Settings -> Output file name -> 3D format extension -> For Bino, PotPlayer... And, in PotPlayer's Preference, go to Video -> 3D video mode, and tick the last two options. It will detect the right format of the 3D files, and display them accordingly automatically. Very handy! Have fun! __________________ r0lZ PgcEdit homepage (hosted by VideoHelp) BD3D2MK3D A tool to convert 3D blu-rays to SBS, T&B or FS MKV
16th April 2015, 01:12 #346 | Link
brochild
Registered User
Join Date: Apr 2015
Posts: 11
Quote:
Originally Posted by r0lZ Fine. Thanks. BTW, if you adopt PotPlayer, you may want to select Settings -> Output file name -> 3D format extension -> For Bino, PotPlayer... And, in PotPlayer's Preference, go to Video -> 3D video mode, and tick the last two options. It will detect the right format of the 3D files, and display them accordingly automatically. Very handy! Have fun!
I cannot get autoplay to work (Windows 7, 64-bit) with a Blu-ray.
I added the autoplay by hitting f5 then configured the options.
I tried Windows autoplay settings and potplayer was configured.
It just would not launch when a Blu-ray is inserted.
Am I the only one with this issue?
I looked all over the internet for tips - nothing works so far?
16th April 2015, 07:35 #347 | Link r0lZ PgcEdit daemon Join Date: Jul 2003 Posts: 7,404 I have installed PotPlayer in portable mode here, and therefore it has not registered itself for the autoplay. Sorry, but I can't help. There is a lot of freeware programs that can add or edit the autoplay entries, like this one. I have never used such programs, but perhaps they will work for PotPlayer. __________________ r0lZ PgcEdit homepage (hosted by VideoHelp) BD3D2MK3D A tool to convert 3D blu-rays to SBS, T&B or FS MKV
18th April 2015, 08:52 #348 | Link
r0lZ
PgcEdit daemon
Join Date: Jul 2003
Posts: 7,404
Quote:
Originally Posted by r0lZ I will try to find a solution based on the saved palette. But it's without guarantee...
As I wrote above, BDSup2Sub (java and ++) use a fixed 16-colour palette for their conversion to IDX/SUB. By default, they use the internal palette, with a set of reasonable colours. Unfortunately, with some BDs, the default colours are not suitable, and therefore the result looks bad. For example, the shadow of the subtitles may be much "lighter" than it should be, or light yellow subtitles are converted to white. Those problems have been reported by youli here and by De_Hollander here.
Unfortunately, I can't do much to solve the problem. It is possible to edit the palette to obtain better results, but there are many limitations, and it is impossible to create a palette that will give good results for all BDs. Since it is possible to save the modified palette on disc and to load a previously saved palette from the command line, I have tried to generate the "best" palette, that should work fine with most BDs. I have tried to define 10 different levels of greys (from pure white to pure black) and a few levels of yellow in the remaining slots. But that doesn't work well at all! I have noticed that BDSup2Sub doesn't compare the colours in the palette with the colours of the subtitles to select the best ones, but it assumes always that black is in slot 0, white in slot 1, light grey in slot 2, dark grey in slot 3 and so on. Therefore, if you set, for example, a pure red in slot 1, and white in slot 15, the white subtitles will be converted to red, because BDSup2Sub uses the colours in the slot supposed to contain white, without verifying its content!
As a consequence, it is not possible to define more than 2 shades of greys (plus white and black), and that is not sufficient to cover all situations that can happen in all BDs. It's really a pity. IMO, the palette should be generated dynamically, according to the content of the source subtitle stream. But it's not the case, and I can't change that. Anyway, due to the difficulties, I have abandoned the idea of generating myself a good palette suitable for the current SUP stream to convert.
However, if you think it is possible to use a better palette for most BDs, you can export a palette, and tells BD3D2MK3D to force BDSup2Sub to use it. It's relatively simple to do. Open BDSup2Sub, go to Edit -> Edit Default DVD Palette, and modify the colours. Do not forget that you should never change completely a colour. For example, to change the yellow, you should modify only the two slots containing the light and dark yellow. Then, export the palette on disc. In BD3D2MK3D, go to tab 2, and add this in the Additional BDSup2Sub Options field:
Code:
--palette-file "path\to\Alt_palette.ini"
If you have to convert a lot of subtitles from the same BD (or set of BDs from the same producer), that may be much more easy than having to fix the colours of each .IDX file after the conversion, as explained earlier.
Take care! BDSup2Sub (java version) and BDSup2Sub++ have the same --palette-file option, but the format of the INI file is DIFFERENT for the two programs! (The java version doesn't include the first slot in the INI: it is always black and cannot be changed. The ++ version includes it, and therefore all subsequent colours are in different slots!) Therefore, if you change the Settings -> BDSup2Sub option, don't forget to change the palette file too!
Summary: To avoid the problem of the bad colours in your subtitles, you can do one of the following:
• Convert only to BD SUP format. (The price to pay is a less good compatibility with some players.)
• Let BD3D2MK3D do its job (without forcing a specific palette), and then verify if the IDX/SUB files have correct colours, and when it's not the case, change them with the Edit DVD Palette option of BDSup2Sub. You will have to do it for all IDX/SUB files that have been converted. (Note that you can do that during the x264 work, as long as the subtitles are modified before the MKV file is created.)
• Force BDSup2Sub to use a modified palette, better for the subtitles BD3D2MK3D has to convert, with the --palette-file option as explained above. The problem is that you need the subtitles to verify if the palette is good, and therefore this method is recommended only when converting several BDs from the same producer, with similar subtitles.
__________________
r0lZ
PgcEdit homepage (hosted by VideoHelp)
BD3D2MK3D A tool to convert 3D blu-rays to SBS, T&B or FS MKV
18th April 2015, 10:40 #349 | Link De_Hollander Registered User Join Date: Jul 2007 Posts: 55 Now I had a few Blu-rays for which the subtitle colour palette was immediately good. Jurassic Park gave problems with the colour palette; I, Robot and 300: Rise of an Empire did not. Last edited by De_Hollander; 18th April 2015 at 11:30.
18th April 2015, 11:48 #350 | Link r0lZ PgcEdit daemon Join Date: Jul 2003 Posts: 7,404 Yes, I know. It's unpredictable. I am still working on a (possible) solution. When a subtitle stream must be converted to 3D, it is necessary to convert it to XML/PNG format anyway. I can therefore take any PNG in the set of images that has been generated by BDSup2Sub, and analyse it to get its histogram. Then, I can extract the 3 most used colours from the histogram, verify if they form a shade of greys (or perhaps also yellows), and if it's the case, use them to build a palette dynamically, and use that palette to generate the final 3D VobSub file. However, there are still several problems. 1. If the user wants the 2D subtitles in VobSub format, there is no need to convert it to XML/PNG, and therefore I have no PNG file to analyse. (IMO, it will be a waste of time to convert to a temp XML/PNG file anyway, just to analyse the colours.) 2. A single subtitle stream may contain subtitles in different colours. For example, some subtitles may be yellow, and others white. Therefore, picking a single PNG is not sufficient to ensure that all colours necessary to convert the whole stream will be suitable. (And analysing all PNGs is too time consuming and too complex.) 3. Even with "good" colours in the available slots of the palette, there is no guarantee that BDSup2Sub will use them correctly. It may still use them blindly, without knowing that they have changed. So, the result is not guaranteed. However, the method should work relatively well in most cases (when all subtitles of the set use the same black, white and gray or yellow colours). I may add an option to *try* to generate a better palette automatically for the 3D subtitles in VobSub format, but I still have to do numerous tests to be sure that that will give relatively good results, and of course to be sure that the result will never be worse than with the default palette. Currently, I'm still not convinced. __________________ r0lZ PgcEdit homepage (hosted by VideoHelp) BD3D2MK3D A tool to convert 3D blu-rays to SBS, T&B or FS MKV Last edited by r0lZ; 18th April 2015 at 11:50.
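To make the histogram idea above concrete, here is a hedged, generic Python/Pillow sketch by the editor (not BD3D2MK3D or BDSup2Sub code; the PNG file name is a placeholder) for pulling the dominant opaque colours out of one exported subtitle image:

```python
# Editor's generic sketch with Pillow; "subtitle_0001.png" is a placeholder name.
from PIL import Image

img = Image.open("subtitle_0001.png").convert("RGBA")

# getcolors() returns (count, colour) pairs; maxcolors must be large enough,
# otherwise it returns None.
colours = img.getcolors(maxcolors=img.width * img.height)
colours.sort(reverse=True)                       # most frequent first

opaque = [(count, rgba) for count, rgba in colours if rgba[3] > 0]
for count, rgba in opaque[:3]:                   # three dominant opaque colours
    print(count, rgba)
```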
18th April 2015, 16:15 #351 | Link De_Hollander Registered User Join Date: Jul 2007 Posts: 55 Is there an option in BD3D2MK3D to only extract and convert the subs to 3D? Otherwise I have to demux all the streams.
18th April 2015, 16:23 #352 | Link r0lZ PgcEdit daemon Join Date: Jul 2003 Posts: 7,404 No. You have to demux the MVC video stream and the subs, then extract the 3D-Planes from the MVC stream (with MVCPlanes.exe available in the toolset directory), and finally convert the subs to 3D (with BD3D2MK3D's Tools menu). Or just let it do it automatically when you convert the movie to SBS or T&B. BTW, why do you need a separate option to do that? BD3D2MK3D does it automatically anyway. __________________ r0lZ PgcEdit homepage (hosted by VideoHelp) BD3D2MK3D A tool to convert 3D blu-rays to SBS, T&B or FS MKV
20th April 2015, 08:43 #353 | Link frank Registered User Join Date: Oct 2001 Location: Germany Posts: 793 Generating new sup (Workaround) In difficult cases you can simply generate new subtitles. 1. Convert the subtitle file to text .srt with Subtitle Edit. It's not much work for forced subtitles. 2. Generate new sup file with tsMuxer GUI (tools). Use .srt file as input and demux to .sup. In tsMuxer you can select font, size, color, shift, border pixels... 3. Replace the sup file in BD3D2MK3D. Generate new 3D subtitles. ____ frank Last edited by frank; 20th April 2015 at 08:46.
20th April 2015, 08:58 #354 | Link r0lZ PgcEdit daemon Join Date: Jul 2003 Posts: 7,404 It is also possible to convert the SRT file to SUP directly with Subtitle Edit. The quality is good, although I don't know if there are as many options as with tsMuxer. But anyway, with your method, you'll get 2D subtitles. De_Hollander wants 3D subs. And, for 3D subs, it is highly recommended to use the original SUP streams from the BD as a basis, because the sizes and positions of the subtitles are very important. They are lost when you convert them to SRT, due to the different font and font size and the fact that the positions are not saved in SRT format. And to generate the 3D subs with the correct depth, it is necessary to demux the MVC video stream anyway (to get the 3D-Planes) and it is easy to demux the original SUP streams at the same time. Why not use them directly, without the difficult OCR job for converting them to text format? (The OCR of Subtitle Edit is surprisingly good, but errors are unavoidable, and you have to verify all subtitles anyway.) __________________ r0lZ PgcEdit homepage (hosted by VideoHelp) BD3D2MK3D A tool to convert 3D blu-rays to SBS, T&B or FS MKV
20th April 2015, 09:04 #355 | Link De_Hollander Registered User Join Date: Jul 2007 Posts: 55 @frank No, that's not good for me, because your subtitles are not at the correct depth and image positions, and they cause a strange 3D ghosting effect if the subtitle pop-out is prevented. Therefore, I would simply like to always use BD3D2MK3D. Last edited by De_Hollander; 20th April 2015 at 09:06.
20th April 2015, 11:50 #356 | Link De_Hollander Registered User Join Date: Jul 2007 Posts: 55 What's the best way to remux only the 3D ISO to ISO or MKV? Demuxing a seamless-branching 3D disc is a problem with tsMuxer. There is a problem with overlapping, like Tangled. Can BD3D2MK3D remux? Last edited by De_Hollander; 20th April 2015 at 12:03.
20th April 2015, 11:57 #357 | Link r0lZ PgcEdit daemon Join Date: Jul 2003 Posts: 7,404 No. BD3D2MK3D, as its name implies, is made to convert a BD3D to MK3D, and nothing else. If you want to do totally other things, please post in the right threads. __________________ r0lZ PgcEdit homepage (hosted by VideoHelp) BD3D2MK3D A tool to convert 3D blu-rays to SBS, T&B or FS MKV
22nd April 2015, 16:32 #358 | Link
frank
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 793
r0lZ:
Quote:
But anyway, with your method, you'll get 2D subtitles.
I know, that's why I wrote for difficult (or special) cases (to add words, correcting syntax, colors, borders,...).
Quote:
Why not use them directly, without the difficult OCR job for converting them to text format?
I normally use it but I had cases where forced subtitles were completely unsatisfactory in my language (e.g. Jurassic Park).
Forced subs are not so many and I can edit every position. Then I only replace the (2D) sup, the rest makes BD3D2MK3D.
More important: We can create our own .srt subtitles and convert them into .sup with the tools on board BD3D2MK3D.
Surely this is not for beginners.
Last edited by frank; 22nd April 2015 at 16:37.
22nd April 2015, 16:42 #359 | Link r0lZ PgcEdit daemon Join Date: Jul 2003 Posts: 7,404 Right! Of course, if you want to edit the subtitles, it is much easier to edit a SRT file with a text editor than to edit the original bitmaps. And yes, it is possible to put the edited subtitles back at the right position and with the right depth with the Tools of BD3D2MK3D. But I agree that it's not really easy. I should write a guide to explain how to add external subtitles (or edit an existing stream as you suggest) to the 3D MKV, with (more or less) correct positions and depths. But I haven't much time, and I'm not sure there are many people interested in that job. __________________ r0lZ PgcEdit homepage (hosted by VideoHelp) BD3D2MK3D A tool to convert 3D blu-rays to SBS, T&B or FS MKV
26th April 2015, 10:49 #360 | Link sambal Registered User Join Date: Sep 2011 Location: Amsterdam, Netherlands Posts: 8 Request chapter option Recently discovered your program and I love it! Just the thing I've been looking for since I got interest in 3D, about 3 years ago. Especially the 3D subs are very good, I used to use several programs to accomplish this task, including Photoshop to make 3D-T&B out of 2D. Your way is much easier and less time consuming. I would like to ask you to implement an option to only remux a chapter instead of the whole BD. In my case it would mainly be for test purposes, but others might have other uses for it.
https://brilliant.org/problems/a-problem-by-swapnil-das-4/
# 1 equation 3 variables
Algebra Level 2
$\large 2^{x} = 3^{y} = 12^{z}$
If the equation above is fulfilled for non-zero values of $x, y, z$, find the value of $\frac{z(x+2y)}{xy}$.
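A hedged worked sketch of one route to the answer, introducing an auxiliary common value $k$ that is not part of the original statement (skip it if you prefer to solve the problem yourself):

$$2^{x}=3^{y}=12^{z}=k \quad\Longrightarrow\quad x=\frac{\ln k}{\ln 2},\qquad y=\frac{\ln k}{\ln 3},\qquad z=\frac{\ln k}{\ln 12},$$

$$\frac{z(x+2y)}{xy}=\frac{\dfrac{\ln k}{\ln 12}\cdot\ln k\left(\dfrac{1}{\ln 2}+\dfrac{2}{\ln 3}\right)}{\dfrac{(\ln k)^{2}}{\ln 2\,\ln 3}}=\frac{\ln 3+2\ln 2}{\ln 12}=\frac{\ln 12}{\ln 12}=1.$$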
https://math.stackexchange.com/questions/526100/algebraic-numbers-that-cannot-be-expressed-using-integers-and-elementary-functio/526382
|
# Algebraic numbers that cannot be expressed using integers and elementary functions
Can we give an explicit${^*}$ example of a real algebraic number that provably cannot be represented as an expression built from integers and elementary${^{**}}$ functions only?
${^*}$ explicit means we can write down a polynomial equation with integer coefficients having the algebraic number as a root, and an interval with rational bounds that isolates that root.
${^{**}}$ an expression built from integers and elementary functions only means any valid expression in the set of elementary expressions $\mathcal{E}$ (as defined in that question at MO). Briefly, it is any finite combination of the following:
• the imaginary unit $i$,
• the exponent $x\mapsto e^x$,
• the principal branch of the natural logarithm $x\mapsto\ln x$, provided $x\ne0$, and
• the multiplication function $(x,y)\mapsto x\cdot y$.
Note that it allows one to express constants $\pi$, $e$, integers, rationals, sums, powers, radicals, and also trigonometric and hyperbolic functions and their inverses, e.g. $$\pi=i\cdot i\cdot i\cdot \ln(i\cdot i).$$
Update: I reposted this question at MO.
• If the field of numbers that can be expressed in terms of integers and elementary functions is algebraically closed, then the answer is no Oct 14 '13 at 18:05
• Well, that is to say, for some subset of elementary functions and their compositions. I.e. things like $\cos ((p/q) \arctan (a/b))$ like what you wrote. If any such set is algebraically closed then the answer is no. Oct 14 '13 at 18:16
• My understanding is that the background to Hilbert's 13th Problem was a result that the general sixth degree polynomial's roots cannot be expressed in terms of functions of one argument. I'll try to find a reference, but I take it your "elementary functions" are of one argument. Oct 14 '13 at 18:37
• @hardmath The solution to Hilbert's $13^{th}$ Problem found by Kolmogorov and Arnold states that 2-argument functions are sufficient to solve algebraic equations of $7^{th}$ degree. For example, addition, multiplication and raising an expression to a power are all 2-argument functions. Oct 14 '13 at 18:51
• It wouldn't hurt to be more explicit about what you mean by 'elementary functions' - in particular, whether you just mean the exp-based functions (cos, sin, arctan, log, etc.) or whether you mean to explicitly allow e.g. things like hypergeometrics. Also, the Kolmogorov/Arnold result is not, AFAIK, speaking specifically in terms of elementary functions and so may not be (entirely) germane here. Oct 14 '13 at 19:48
This may not be what your are looking for but, after some tinkering, I found your example in fact can be expressed in radicals. Let,
$$x = 2\cos \frac{2\arctan k}{5}$$
then $x$ is a root of,
$$x^5-5x^3+5x+2\left(\frac{k^2-1}{k^2+1}\right) = 0$$
This is the DeMoivre quintic in disguise,
$$x^5+5ax^3+5a^2x+b=0$$
and is solvable in radicals. Your $\alpha$ then has the radical expression,
$$\alpha = 2\cos \frac{2\arctan 2}{5} =\left(\frac{-3-4i}{5}\right)^{1/5}+\left(\frac{-3+4i}{5}\right)^{1/5} = 1.807059\dots$$
• Thanks! I fixed my question. I hope the new example should work. Welcome to try to prove me wrong this time as well :) Oct 14 '13 at 19:51
• @VladimirReshetnikov: Actually, it is as well. :) All $2\cos\frac{2\arctan k}{n}$ can be expressed in radicals. The new one is just $$2\cos \frac{2\arctan 2}{7} =\left(\frac{-3-4i}{5}\right)^{1/7}+\left(\frac{-3+4i}{5}\right)^{1/7} = 1.900768\dots$$ Oct 14 '13 at 19:58
• Oops... I need to think deeper to find a real example. Oct 14 '13 at 20:18
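As a quick numeric sanity check of the radical identities above (my own addition, not part of the thread; it relies on Python taking principal complex roots):

```python
import math

# alpha = 2*cos(2*arctan(2)/n) for n = 5 and n = 7, versus the proposed radicals
for n in (5, 7):
    alpha = 2 * math.cos(2 * math.atan(2) / n)
    radical = ((-3 - 4j) / 5) ** (1 / n) + ((-3 + 4j) / 5) ** (1 / n)
    print(n, alpha, radical)  # real parts agree; the imaginary part is ~0
```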
If I'm not mistaken, this is a profound problem and little is known about it. Some years back there was an article in the American Mathematical Monthly by Timothy Chow, What is a Closed-Form Number? (pdf here). I believe what Chow calls EL numbers are the same as the numbers you have identified.
This area is closely connected with Schanuel's conjecture. Chow proves one conditional result that is relevant here:
• If Schanuel's conjecture is true, then the algebraic numbers $\alpha$ belonging to the class EL are precisely those whose equations are solvable over $\mathbb{Q}$ (meaning the Galois group of the splitting field of the minimal polynomial for $\alpha$ is solvable).
See corollary 1 on page 444 of Chow's paper. I'm not sure a single explicit example that answers your question is known, although I'd be absolutely delighted to be shown otherwise.
• That PDF is interesting but I only have a vague understanding of the terminology. Are the terms "transcendence degree", "fields over $\mathbb{Q}$", and "irreducible polynomial" topics that would be covered in undergrad algebra?
– user18862
Oct 16 '13 at 23:14
• Definitely the terms "fields over $\mathbb{Q}$" and "irreducible polynomial" would be covered in a standard undergraduate algebra course that covers the elements of Galois theory. "Transcendence degree" might be a different story, but the concept itself is not unduly difficult: it is the maximal number of algebraically independent elements in a field extension, very analogous to how the dimension of a vector space is the maximal number of linearly independent elements. All of this would certainly be covered in a respectable graduate algebra book like Lang's. Oct 16 '13 at 23:25
• user43208 is right. The elementary numbers (see Wikipedia) can be generated by applying elementary functions to the rational numbers. Chow's numbers EL are the explicit elementary numbers.
– IV_
Aug 21 at 17:31
https://www.techwhiff.com/learn/what-affects-weather/300774
# What affects weather?
###### Question:
What affects weather?
http://math.stackexchange.com/questions/50947/product-of-combinations-is-probability
Product of Combinations is Probability?
For atomic orbitals:
E2 orb: $\binom{N-n_1}{n_2}$
E2 orb: $\binom{N-n_1-n_2}{n_3}$
E2 orb: $\binom{N-n_1-n_2-n_3}{n_4}$
...
En orb: $\binom{n_i}{n_i}$
now probability function is:
$P= N! \prod_{i=1}^{n}\frac{1}{n_{i}!}$
Why? In general?
[Update]
Every combination is greater than 1. So their product is greater than 1. How on earth can such multiplication lead to a probability function? Is the probability function scaled back to range $[0,1]$?
-
Maybe it would be a good idea to reproduce the relevant passages. I find your question highly unclear. – t.b. Jul 11 '11 at 23:13
Or \binom{n}{k}. – Dylan Moreland Jul 12 '11 at 0:26
BOUNTY: People are overlooking the last assertion that the product is a probability function. I understand the term "probability function" in a such way that you give some input and then you get some output in some range usually scaled to $[0,1]$. I am totally lost with this. Even multinomial distribution does not lead to results in the range $[0,1]$. I will reward the bounty to a person who can sort out the last statement. What is the probability function? Why is it probability function? By which definition is it probability function? Please, vote this up so people see it. – hhh Aug 29 '11 at 20:35
@Didier Piau: please read page 10 of the lecture slides here; warning, not in English, but a lot of pictures. I am lost with it, why it would become a probability function -- the question here is pretty much explaining that page. The page deduces the probability function with the multinomial distribution (my understanding) but I cannot see how it can be right. It is apparently scaling the probability function to a changing range; I am lost now as to what they mean here. Or I am just misunderstanding the term "todennakoisyssfunktio" (Finnish for "probability function"). Dictionary.... – hhh Aug 29 '11 at 21:12
@Theo, All: Very little to add. You seem to have gotten the point that the author seems to take certain liberties in using mathematical concepts. Note that in addition to this not being a probability function he freely differentiates w.r.t to a parameter ranging over natural numbers :-). Live with it. At times physicists do not give definitions, they give descriptions. Here the description is statistical. If $N$ is in the range of Avogadro's constant ($10^{23}$) it somehow works. In other words: the OP is experiencing a "culture shock". Try to get the idea, and don't get stuck in the language. – Jyrki Lahtonen Aug 30 '11 at 4:27
As you correctly point out, the terminology on page 10 of the lecture notes you linked to is incorrect: $P$ is the number of microstates making up the given macrostate; it is not a probability, since it typically exceeds one.
However, dividing $P$ by the total number of microstates $n^N$ does give the probability of the given macrostate, under the assumption that all microstates are a priori equally likely. Since the constant $1/n^N$ is the same for all macrostates of the system, we may safely ignore it, as the author of the lecture notes does, when comparing the probabilities of different macrostates.
(If I were reviewing those notes, I'd also point out that the author seems rather excessively fond of the letter "n". Having $n$ and $N$ as system parameters is confusing enough, but when he then introduces the macrostate parameters $n_1$, $n_2$ and up to $n_n$...)
-
Physicists are not as rigorous with their notation sometimes. I referred to some old notes, and at some places, this was called "unnormalized" probability. – kuch nahi Aug 30 '11 at 3:09
Yeah, the notation is awful. I have seen definitions of functions like $F(t)=\int_0^t f(t)\,dt$ in a chemistry book. The intention is clear (in the context at least), but my calculus students were reading that book :-). IOW don't hold your breath, if you expect the author to understand this bit of criticism. – Jyrki Lahtonen Aug 30 '11 at 4:41
Ilmari, @Jyrki: Thanks a lot for following up on my pings. I removed the off-topic comments to your answers. – t.b. Aug 30 '11 at 10:54
As André Nicolas shows, $$\binom{N}{n_1}\binom{N-n_1}{n_2}\dots\binom{N-n_1-n_2-\dots-n_{k-1}}{n_k}=N!\prod_{i=1}^k\frac{1}{n_i!}$$ This is an integer; it is the number of ways to arrange $N$ distinct things into $k$ bins with $n_i$ things in bin $i$. If you sum this over all possibilities for the $\{n_i\}$, you get $k^N$ (the number of maps from $N$ things to $k$ bins). So you would get a probability distribution if you divide by $k^N$.
The multinomial theorem states $$\left(\sum_{i=1}^kx_i\right)^N=\sum_{\sum_{i=1}^kn_i=N}N!\prod_{i=1}^k\frac{x_i^{n_i}}{n_i!}$$ Setting $x_i=1$ for $1\le i\le k$, we get that $$k^N=\sum_{\sum_{i=1}^kn_i=N}N!\prod_{i=1}^k\frac{1}{n_i!}$$
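As a quick numeric cross-check of these identities (my own addition, not part of the answer; it uses only the Python standard library):

```python
from itertools import product
from math import comb, factorial, prod

N, k = 6, 3

# The telescoping product of binomials equals the multinomial coefficient.
ns = (2, 3, 1)  # any composition of N into k parts
binom_prod = prod(comb(N - sum(ns[:i]), ns[i]) for i in range(k))
multinom = factorial(N) // prod(factorial(n) for n in ns)
assert binom_prod == multinom  # both are 60 here

# Summing the multinomial coefficients over all compositions gives k**N,
# so dividing by k**N turns the counts into genuine probabilities.
total = sum(
    factorial(N) // prod(factorial(n) for n in ns)
    for ns in product(range(N + 1), repeat=k)
    if sum(ns) == N
)
assert total == k ** N  # 729 = 3**6
```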
-
you are right here if you divide by $k^{N}$ but the assertion is that you do not divide by $k^{N}$, it is the score of this problem. The statement has no such scaling. +1 because I think you are seeing the main problem here, hopefully other people can focus on this. – hhh Aug 29 '11 at 21:27
...perhaps too much asked but I want to prove this rigorously. Very well, we have here symmetric polynomials. $(x_{1}+x_{2} +...+x_{k})^{2} = (X_{i}^{2} + 2(X_{i}X_{j}))$ where large $X$ is a generator thing (my own terminology, noticed in some very old Vaisala Number theory book or something like that, trying to find proper terminology...). And again multiplication, there must be some rules from number theory to kill this problem in a few second. – hhh Aug 29 '11 at 21:48
@hhh: It seems that an assertion is being made about the probabilistic distribution of the of states (counted as integers), from which a probability will be deduced (after dividing by the total). Since they are taking the derivative of the logarithm, any constant factor drops out. Perhaps the author is taking liberties with the term "probability function" or perhaps it loses something in the translation, but I think the final result is still valid. – robjohn Aug 29 '11 at 21:52
you are clever! Very misleading and dangerous statement, it is not the probability function at that point per se -- only the amount of occurrences. But I still want to rigorously understand your last part...calculating. – hhh Aug 29 '11 at 21:56
@hhh: You don't need to dig for Väisälä's number theory book (no need to leave out diacritical marks from letters, we have left the era of 7-bit ASCII here). You get the multinomial formula from the binomial formula using induction on the number of unknowns. Robjohn hits the nail on the head: the constant $1/N!$ disappears at the end of the day. – Jyrki Lahtonen Aug 30 '11 at 4:39
Before the question is closed, I would like to give you a start.
Suppose $a+b+c+d=N$. Let us look at $$\binom{N}{a}\binom{N-a}{b}\binom{N-a-b}{c}\binom{N-a-b-c}{d}$$ (the last term is $1$, it is just there to make things look nice.)
Calculate, using the usual formula for $\binom{n}{k}$.
The first term is $$\frac{N!}{a!(N-a)!}.$$
The second term is
$$\frac{(N-a)!}{b!(N-a-b)!}.$$
The third term is
$$\frac{(N-a-b)!}{c!(N-a-b-c)!}.$$
Note that $N-a-b-c=d$.
Multiply, and observe the very nice cancellations! We get
$$\frac{N!}{a!b!c!d!}.$$
The "general" case solution is basically the same, except that all those subscripts tend to make things less obvious.
Added: By "the usual formula" for $\binom{n}{k}$ I mean
$$\binom{n}{k}=\frac{n!}{k!(n-k)!}.$$
Since the question has not yet been closed, I am adding a link to a Wikipedia entry which I think is quite well written, and which I hope you will give you all of the additional information you may need.
-
user6312: do I understand right that the probability function is just a multinomial distribution $P=\binom{N}{k_{1},k_{2},...,k_{m}}$? – hhh Jul 12 '11 at 9:56
There is no sense with it! It must be scaled to $[0,1]$ or you are using some different definition of probability function, anyway this is the assertion my lecture slides offer. – hhh Aug 29 '11 at 20:30
Naturally, as you observe, these are not probabilities, and any reference that calls them a probability function is just wrong. If you know the probabilities $p_1$, $p_2$, and so on up to $p_k$ (where $p_j$ is the probability of being in state $j$), then after $N$ trials, probability of $n_1$ in state $1$, $n_2$ in state $2$, and so on up to $n_k$ in state $k$ is $\frac{N!}{n_1!\cdots n_k!}p_1^{n_1}\cdots p_n^{n_k}$. That conceivably may be what you want, but I have a feeling it is not. (to be continued) – André Nicolas Aug 29 '11 at 21:43
(continued) More likely, use the fact that the sum of all these numbers, as $n_1$, $n_2$, and so on up to $n_k$ range over all choices with $n_1+\cdots+n_k=N$, is $k^N$. So if you divide each of the numbers we have been working with by $k^N$, you really do get probabilities. (This is the "equally likely" case of the multinomial distribution that I gave the formula for in the previous comment.) So divide all your numbers by $k^N$. – André Nicolas Aug 29 '11 at 21:46
@hhh: I forgot to start the above comments properly. Don't know how exactly the system works, and whether they are automatically sent to you. – André Nicolas Aug 29 '11 at 23:41
The "probability function" in the notes is usually referred to as the number of microstates $\Omega$ (I was taught this way). By the postulate of equal a priori probability, the probability, of a particular macrostate $i$ is $$p_i = \frac{1}{\Omega}$$So the probability function is different from the actual probability, which is probably what caused the confusion.
Hence the $P$ in your notation is the number of ways $N$ particles can be classified into energy levels according to their energies (by the product rule, as your instructor and Andre above has done). Note that the analysis is classical as it assumes the particles filling the energy levels are distinct (come with labels).
You can find the same method in the Wikipedia article on Maxwell Boltzmann Distribution, except that it does not use confusing terminology of "Probability function".
$$\Omega = N!\prod \frac{g_i^{N_i}}{N_i!}$$ where in your case all $g_i = 1$ as there are no degenerate levels involved. Your instructor derives the MB distribution by trying to maximize the above number (as a system naturally seeks the maximum number of microstates) given the constraints (no exchange of particles or energy of the system with the surroundings) using Lagrange multipliers.
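A small illustration of that maximization step (my own addition; it ignores the energy constraint and only shows that the most evenly spread occupation numbers maximize the count of microstates):

```python
from itertools import product
from math import factorial, prod

N, k = 6, 3
best = max(
    (ns for ns in product(range(N + 1), repeat=k) if sum(ns) == N),
    key=lambda ns: factorial(N) // prod(factorial(n) for n in ns),
)
print(best)  # (2, 2, 2), the most even split, has the most microstates
```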
-
https://indico.cern.ch/event/218030/contributions/450774/
# EPS HEP 2013 Stockholm
17-24 July 2013
KTH and Stockholm University Campus
Europe/Stockholm timezone
## SNO+
20 Jul 2013, 11:00
15m
E2 (KTH Campus)
### E2
#### KTH Campus
Talk presentation Neutrino Physics
### Speaker
Matthew Mottram (University of Sussex)
### Description
One of the most important open questions in neutrino physics is the question of whether neutrinos are Majorana or Dirac particles. Attempts to detect the (possible) Majorana nature of neutrinos focus around the double beta decay process. If neutrinoless double beta decay were observed, it would not only prove that neutrinos are Majorana particles, but it would also provide a measurement of the neutrino mass. Loading the double beta decay isotope 130Te into the SNO+ liquid scintillator has the potential to allow for an extremely powerful double beta decay search. We have developed a brand new technique to do this, and to remove other contaminants that might otherwise interfere with the measurement. Although the energy resolution of the detector will not be as good as that of other existing experiments, the amount of isotope that could be suspended in the scintillator is very large. This means that SNO+ can hope to see a large enough number of neutrinoless double beta decay events that we can fit to the energy spectra of the 2 neutrino and 0 neutrino signals (and those of the radioactive backgrounds), making us much less dependent on energy resolution than competing experiments. In fact, based on some preliminary simulations, if we loaded the scintillator with 0.3% natural Te (which would contain 800kg of 130Te isotope) we would be able to detect neutrinoless double beta decay at neutrino masses approaching the range of the “inverted hierarchy,” a particularly interesting regime for theoretical predictions related to one of two possible ways for the 3 neutrino masses to be ordered. A 3% loading, corresponding to 8 tons of 130Te isotope in the detector, would give us the potential to probe the majority of this interesting range with high sensitivity.
### Primary author
Matthew MOTTRAM (U. of Sussex)
Slides
https://www.physicsforums.com/threads/what-is-this-integral.760277/
# What is this integral
What is this integral
$\int\left(\frac{\mathrm{arcsinh}(ax)}{ax}\right)^{b}dx$
where a and b are constants.
pasmith
Homework Helper
The substitution $ax = \sinh t$ yields $$\int \left(\frac{\mathrm{arcsinh}(ax)}{ax}\right)^b\,dx = \int \left(\frac{t}{\sinh t}\right)^b \frac{\cosh t}{a}\,dt \\ = \left[ \frac{1}{a(1-b)}\frac{t^b}{(\sinh t)^{b-1}}\right] + \frac{b}{a(b - 1)} \int \left(\frac{t}{\sinh t}\right)^{b-1}\,dt \\$$ on integration by parts. Unfortunately that seems to be as far as one can get.
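A quick numeric spot check of this reduction (my own sketch, not from the thread), taking $a = 1$, $b = 2$ and integrating over $t \in [1, 2]$; it assumes mpmath is available:

```python
from mpmath import mp, quad, sinh, cosh

mp.dps = 30
a, b = 1, 2

# integral of (t/sinh t)^b * cosh t / a, i.e. the substituted integrand
lhs = quad(lambda t: (t / sinh(t))**b * cosh(t) / a, [1, 2])

# boundary term plus the reduced integral from the integration by parts above
bracket = lambda t: t**b / (a * (1 - b) * sinh(t)**(b - 1))
rhs = bracket(2) - bracket(1) + b / (a * (b - 1)) * quad(lambda t: (t / sinh(t))**(b - 1), [1, 2])

print(lhs, rhs)  # the two values agree to working precision
```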
Philip Wood
Gold Member
The wonderful Wolfram online integrator can't do it, so there's not much hope...
I confirm, Mathematica replies: "no result found in terms of standard mathematical functions" which is true in most cases.
TheDemx27
Gold Member
Just starting with Mathematica, I type in:
Code:
Integrate[((ArcSinh[a * x])/ a * x)^b, x]
and I get out:
Code:
\[Integral]((x ArcSinh[a x])/a)^b \[DifferentialD]x
Is there some reason I am getting a different output?
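A likely explanation (my note, not from the thread): in Mathematica, / and * group left to right, so (ArcSinh[a * x])/ a * x parses as (ArcSinh[a x]/a)*x, which is exactly what the echoed output shows. Writing the denominator with explicit parentheses, i.e. Integrate[(ArcSinh[a x]/(a x))^b, x], reproduces the integrand from the first post (which Mathematica will still return unevaluated, as noted above).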
http://dev.eclipse.org/mhonarc/lists/aspectj-users/msg04784.html
Re: [aspectj-users] newbie needs help configuring
• From: Alexandre Vasseur <avasseur@xxxxxxxxx>
• Date: Fri, 14 Oct 2005 18:38:23 +0200
• Delivered-to: aspectj-users@eclipse.org
Here are the details:
Given an @Aspect aspect, as you can see in your own code there is no
aspectOf() method (as there are implicit ones in the non-@Aspect
AspectJ syntax). This method is either
- injected into the compiled aspect if you compile it with ajc, or
- injected into the javac-compiled aspect during load-time weaving.
In both cases all of that is transparent to you.
So it looks like you have hit some issue with the second option, as
switching to the first one shows that your configuration and aspect are
valid.
I would appreciate it if you could describe where you package the
javac-compiled aspect (alongside the webapp, or at some other level such
as the Tomcat shared classes), or if you could try to narrow down the
issue outside of Tomcat.
Alex
On 10/14/05, Andy Kriger <andy.kriger@xxxxxxxxx> wrote:
> And finally, I used ajc to compile my source and run the same test and
> it worked. So maybe a combination of Annotations and load-time weaving
> is causing the problem?
>
> On 10/14/05, Andy Kriger <andy.kriger@xxxxxxxxx> wrote:
> > When I try the call outside the web application, I get this error...
> > [junit] Testcase: testAOP(com.myco.AOPTest): Caused an ERROR
> > [junit] com.myco.MyAspect.aspectOf()Lcom/myco/MyAspect;
> > [junit] java.lang.NoSuchMethodError:
> > com.myco.MyAspect.aspectOf()Lcom/myco/MyAspect;
> >
> > That looks pretty significant - any idea what it means? Have I missed
> > something in implementing my Aspect or the way I'm using the
> > Annotations?
> >
> > On 10/14/05, Andy Kriger <andy.kriger@xxxxxxxxx> wrote:
> > > 1. Yep, using Java 5
> > > 2. Haven't tried static weaving or logging outside the webapp/SOAP
> > > call - I'll try those and post a followup.
> > > 3. Here's what I see in the LTW log...
> > > info weaving 'com/myco/Service'
> > > info weaver operating in reweavable mode. Need to verify any required
> > > types exist.
> > > weaveinfo Join point 'method-execution(boolean
> > > com.myco.Service.isXml(java.lang.String))' in Type 'com.myco.Service'
> > > (MyAspect.java)
> > > weaveinfo Join point 'method-execution(java.lang.String
> > > com.myco.Service.generateXML(java.lang.String))' in Type
> > > 'com.myco.MyAspect' (MyAspect.java)
> > > etc etc etc for all the methods in Service
> > >
> > > I never see it reach a point where AspectJ logs a message about a
> > > Service method call or " info weaving 'com/myco/MyAspect' ".
> > >
> > > Thanks for the help,
> > > Andy
> > >
> > > On 10/14/05, Matthew Webster <matthew_webster@xxxxxxxxxx> wrote:
> > > >
> > > > Andy,
> > > >
> > > > Hopefully these questions won't seem too silly:
> > > > 1. The @AspectJ syntax requires Java 5 so I assume you are using that to run
> > > > AXIS?
> > > > 2. Have you tried static weaving your application and testing it either
> > > > outside (JUnit) or inside AXIS that way?
> > > > 3. Could you post the LTW log or a least the interesting part of it?
> > > >
> > > > I have successfully tried your testcase with my own simple Service class:
> > > >
> > > >
> > > > info weaving 'com/myco/Service'
> > > > info weaver operating in reweavable mode. Need to verify any required types
> > > > exist.
> > > > weaveinfo Join point 'method-execution(void
> > > > com.myco.Service.main(java.lang.String[]))' in Type 'com.myco.Service'
> > > > (MyAspect.java)
> > > > Service.main()
> > > > info weaving 'com/myco/MyAspect'
> > > > info weaver operating in reweavable mode. Need to verify any required types
> > > > exist.
> > > > info processing reweavable type com.myco.MyAspect: com\myco\MyAspect.java
> > > > info successfully verified type com.myco.MyAspect exists. Originates from
> > > > com\myco\MyAspect.java
> > > > Aspect logXyPath was called
> > > >
> > > > Cheers
> > > >
> > > > Matthew Webster
> > > > AOSD Project
> > > > Java Technology Centre, MP146
> > > > IBM Hursley Park, Winchester, SO21 2JN, England
> > > > Telephone: +44 196 2816139 (external) 246139 (internal)
> > > > Email: Matthew Webster/UK/IBM @ IBMGB, matthew_webster@xxxxxxxxxx
> > > > http://w3.hursley.ibm.com/~websterm/
> > > >
> > > > Please respond to aspectj-users@xxxxxxxxxxx
> > > >
> > > > Sent by: aspectj-users-bounces@xxxxxxxxxxx
> > > >
> > > > To: aspectj-users@xxxxxxxxxxx
> > > > cc:
> > > > Subject: Re: [aspectj-users] newbie needs help configuring
> > > >
> > > >
> > > > The stack trace isn't really meaningful since the only trace I see is
> > > > an Axis SOAP Fault that wraps (and masks) the exception being thrown
> > > > from within the web application. I cannot see where the trace
> > > > originates only really the InvocationTargetException. There's nothing
> > > > in the logs that indicates what problem, if any, AspectJ is having. I
> > > > already have the verbose switch in the aop.xml. If there's a way to
> > > > turn on more logging, please let me know.
> > > >
> > > > On 10/13/05, Alexandru Popescu
> > > > <the.mindstorm.mailinglist@xxxxxxxxx> wrote:
> > > > > #: Andy Kriger changed the world a bit at a time by saying on 10/13/2005
> > > > 11:06 PM :#
> > > > > > I am trying to use load-time weaving in AspectJ 1.5M4 to log our web
> > > > > > service running on Axis 1.2 in Tomcat 5.8. Right now I'm trying to get
> > > > > > a very simple proof-of-concept working. I make calls to web service
> > > > > > methods expecting to see logging to stdout and instead I keep running
> > > > > > into InvocationTargetExceptions. I've included my config below.
> > > > > >
> > > > > > If I comment out the Advice part of MyAspect, everything works fine.
> > > > > > I've tried @Before as well as @After - no luck there. I do see
> > > > > > "weaveinfo Join point..." info in the logs, so things look like they
> > > > > > are being woven. Logging in my code shows everything working through
> > > > > > the service method being invoked and then mysteriously throwing the
> > > > > > InvocationTargetException. I can only guess that it's coming from the
> > > > > > Advice. I've also tried applying the advice to the class invoked by
> > > > > > the service (in case there's some kind of reflection effect from Axis)
> > > > > > but I still see the same problem.
> > > > > >
> > > > > > I really want to show my boss that AOP is valid for our project but
> > > > > > right now I'm dead in the water. Can someone can help me figure out
> > > > > > what's going on?
> > > > > >
> > > > > > Thanks in advance,
> > > > > > Andy
> > > > > >
> > > > > > Tomcat is configured to run with the JVM opt
> > > > > >
> > > > -javaagent:/usr/local/tomcat/shared/lib/aspectjweaver.jar
> > > > > > and shared/lib contains the lib/*.jar files from the AspectJ distro
> > > > > >
> > > > > > Here's my aop.xml
> > > > > >
> > > > > > <aspectj>
> > > > > > <aspects>
> > > > > > <aspect name="com.myco.MyAspect"/>
> > > > > > </aspects>
> > > > > > <weaver options="-verbose -showWeaveInfo">
> > > > > > <include within="com.myco.*"/>
> > > > > > </weaver>
> > > > > > </aspectj>
> > > > > >
> > > > > > Here's my aspect
> > > > > >
> > > > > > package com.myco;
> > > > > >
> > > > > > @Aspect
> > > > > > public class MyAspect
> > > > > > {
> > > > > >
> > > > > > // on any call to our service
> > > > > > @Pointcut("execution( public * com.myco.Service.*(..) )")
> > > > > > void csCall() {}
> > > > > >
> > > > > > // log something
> > > > > > @After("csCall()")
> > > > > > public void logPath()
> > > > > > {
> > > > > > System.out.println("Aspect logXyPath was called");
> > > > > > }
> > > > > >
> > > > > > }
> > > > >
> > > > > Can you add to the aboves the stacktrace you are getting?
> > > > >
> > > > > ./alex
> > > > > --
> > > > > .w( the_mindstorm )p.
> > > > >
> > > > > _______________________________________________
> > > > > aspectj-users mailing list
> > > > > aspectj-users@xxxxxxxxxxx
> > > > > https://dev.eclipse.org/mailman/listinfo/aspectj-users
> > > > >
> > > > _______________________________________________
> > > > aspectj-users mailing list
> > > > aspectj-users@xxxxxxxxxxx
> > > > https://dev.eclipse.org/mailman/listinfo/aspectj-users
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > _______________________________________________
> > > > aspectj-users mailing list
> > > > aspectj-users@xxxxxxxxxxx
> > > > https://dev.eclipse.org/mailman/listinfo/aspectj-users
> > > >
> > > >
> > > >
> > >
> >
> _______________________________________________
> aspectj-users mailing list
> aspectj-users@xxxxxxxxxxx
> https://dev.eclipse.org/mailman/listinfo/aspectj-users
>
https://stats.stackexchange.com/questions/169986/meaning-of-intercept-and-what-the-intercept-should-be-with-no-measurement-error
# Meaning of Intercept and what the intercept should be with no measurement error?
I'm going into university this year, Engineering to be more specific, and I was given an assignment over the summer about regression (something I have no knowledge about). Basically, I have two questions that I have no idea how to answer. Here they are...
1. I need to describe the significance of the intercept. In my Excel output I have a few different kinds from what I can see. There's an image attached so you can see; I have things like "Coefficient, t Stat, P-value...". What's the significance of the intercept? This assignment is basically about finding the Total Energy, Total Charge, Power, and I^2 (by the way, what would you call I^2?) from the elapsed time, Voltage, and Current in a series circuit. I highlighted in blue the intercept and standard error in the image.
2. This one is hard, I cannot find anything to help me... What should the intercept be if there are no measurement errors?
Please, any help will be appreciated!
• You should add the self-study tag as this is homework. Have you read any simple explanations of linear regression, such as this? Have you plotted the data used for each regression to see what they look like? That might help illuminate these concepts. – EdM Sep 3 '15 at 21:50
There are two relationships in mathematics that unfortunately have their terminology often mixed up. A linear relationship is governed by an equation of the form:
$$y = a \cdot x.$$
An affine relationship is governed by an equation of the form:
$$y = a \cdot x + b.$$
The intercept in the affine equation is $b$. Of course, if $y$ has a linear relationship with $x$, it also has an affine relationship with $x$ (with $b$ = 0). Certain things in life/nature truly have linear relationships, for example,
• $y =$ length of an object measured in inches
• $x =$ length of an object measured in feet
$$y = 12 \cdot x.$$
Unfortunately, linear regression typically refers to finding the affine relationship which "best" describes the data. If we had some error in our measurements, and ran a linear regression, we might get results like:
$$y = 12.1 \cdot x - 1.25.$$
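To make this concrete, here is a small sketch I'm adding (not part of the original answer; it assumes NumPy is available): fitting noisy inches-versus-feet data recovers a slope near $12$ and an intercept near $0$.

```python
import numpy as np

rng = np.random.default_rng(0)
feet = np.linspace(0, 10, 50)
inches = 12 * feet + rng.normal(scale=0.5, size=feet.size)  # small measurement error

slope, intercept = np.polyfit(feet, inches, deg=1)
print(slope, intercept)  # roughly 12 and roughly 0
```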
I hope this was helpful. I didn't want to address the question too directly. Let me know if you have any questions.
PS: Did it say statistical significance or just significance in (1)?
• Thank you for taking the time to answer, it asked for "physical significance" if that helps – John Sep 3 '15 at 16:39
• I'm assuming it has to be with the series circuit itself. So what significance it would have on the circuit but that seems weird. (But I may be wrong) – John Sep 3 '15 at 16:41
https://socratic.org/questions/how-do-e-and-z-isomers-arise-in-molecules
# How do E and Z isomers arise in molecules?
anor277
Dec 13, 2016
Structural isomerism derives from different connectivity for a given formula.
#### Explanation:
Geometric isomerism presumes the SAME structural isomerism, i.e. the same connectivity, however different geometry for the same structure. Organic chemistry provides rich examples of structural and geometric isomerism, and $E , Z$ or $\text{cis/trans}$ isomerism provides many instances.
Consider $\text{2-butylene}$, $H_{3}C-CH=CH-CH_{3}$; this is a simple organic structure that can nevertheless generate 2 geometric isomers as shown.
For both isomers, connectivity IS THE SAME: $C1$ connects to $C2$ to $C3$ to $C4$. Nevertheless, because of the different geometry, the isomeric butenes have different structures and chemical properties.
https://math.stackexchange.com/tags/intuition/hot
# Tag Info
7
As you noted, $H$ is always closed under multiplication, thus $H \subseteq G$ is a submonoid. The only way for $H$ to fail to be a subgroup is for it to fail to contain an inverse for one of its elements. Thus, to find a counterexample we have to ensure that no argument of the form $$gAg^{-1} \subseteq A \implies g^{-1}Ag \subseteq A$$ holds. As you ...
7
Since $$\sin(x)=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots,$$ you know that $$\sin(x)+x=2x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots$$ and that $$\sin(x)-x=-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots.$$ Therefore both limits $$\lim_{x\to0}\frac{\sin(x)+x}x\text{ and }\lim_{x\to0}\frac{\sin(x)-x}{x^3}$$ exist; they are equal to $2$ and to $-\frac16$ respectively. This explains why ...
5
Suppose you start with $\$1000$ and play $100$ games where you bet $\$1$ on each win. That's a fair game, so the expected value you end up with is $\$1000$. However, that doesn't mean you'll always end up with exactly $\$1000$. What if you lose all $100$ games? It's unlikely, but still possible, and you'd end up with $\$900$. With the same low probability ...
5
OK, let's talk through the thought process. When I see a difference of two fractions, I give them common denominators first, as per your second $=$. I can't help but factorise the new numerator's difference of two squares after that. Since there's a division by $\sin^2 x$, which $\to0$ as $x\to0$, I need to take out a $\left(\frac{\sin x}{x}\right)^{-2}$ ...
2
I understand the logic to be the same in the case of Bernoulli random variables as in the case of Binomial r.v.s because of independence. In the case of Bernoulli r.v.s, just as in the case of the binomial, the concept of variance is implicit in the notion of expected value itself. That is, the more the expected value per trial deviates from 1 or 0, the more ...
2
Let $\mathcal T$ be the sheaves of type T, for $X\in\mathsf C$ let be $\mathcal T\downarrow X$ be the category of arrows $T\to h_X$ for $T\in\mathcal T$ and let $\mathcal G$ be the Grothendieck construction of $X\mapsto \mathcal T\downarrow X$. Elements of $\mathcal G$ are of the form $r=\{r_{T,X}\overset{r_{f,X}}\to h_X\}_{X\in\mathsf C}$. Let $f:F\to G$ ...
2
Here's another proof (copied from brilliant.org) of the infinite series, but for arbitrary $r<1$. I wonder if it can be adapted for the finite case by thinking about a trapezoid like this instead of a triangle....
2
See these images: this is a graphic explanation of the sum of the geometric progression of ratio $\frac{1}{2}$.
2
There are some nice and succinct answers already. If you'd like even more intuition with as little math and higher level linear algebra concepts as possible, consider two arbitrary vectors $v$ and $w$. Simplest Answer Take the dot product of one vector with the projection of the other vector. $$(P v) \cdot w$$ $$v \cdot (P w)$$ In both dot products ...
1
If $V$ is a subspace of $\mathbb R^n$ and $V\neq\{0\}$, then there is some $v\in V$ such that $v\neq0$. But then the line $\{\lambda v\mid\lambda\in\mathbb R\}$ is a subset of $V$. So, no ball is a subspace of $\mathbb R^n$ and the same argument applies to spheres. Actually, the same argument applies to any bounded subset $\mathbb R^n$ other than $\{0\}$.
1
Ok, so I think the issue is your interpretation of the graph. Notice how the graph of $\{f(a_n)\}$ starts out at $n=1$ at a vertical value of $16$. This makes sense because $(3+\frac{1}{1})^2 = 16$. Then, notice how it decreases from $16$ and slowly flattens out to a value of $9$. This makes complete sense because \begin{align} \lim_{n\to \infty} \left(3+\...
1
I don't know if a vanishing cycle is always non zero in the case of an algebraic Lefschetz pencil, I recall something like that but I am not sure at all. If this is true, it will follow a posteriori from the cohomological study. At first glance, this is not obvious at all. Also, recall that vanishing cycles can be defined in a non algebraic context where ...
1
Draw a segment one unit long. Tick at the first third from the left. On the right side, tick at the first third from the left. On the right side, tick at the first third from the left. On the right side, tick at the first third from the left. … When you are done, you have the infinite sum for $a=\frac13,r=\frac23$. If you stop before infinity, the ...
1
The question is a little long and hard to follow (it would be better if it was posted as multiple different questions), but from what I can see this is what you want: Yes, when you have a multi-variate system you need to multiply the polynomial bases together as you have described. Yes, this can create a really large basis is in the end. This phenomenon is ...
1
@Neal has provided a superb explanation. There is also a YouTube video giving some detailed understanding: https://www.youtube.com/watch?v=BCWBT3OTzNk&list=PLpRLWqLFLVTCL15U6N3o35g4uhMSBVA2b&index=27
1
As per Mike Earnest's argument in the comments, there are scenarios in which you need $\lceil n / 2 \rceil$ lines: If the given points all lie on the same circle, dividing that circle into $n$ arcs, then each line can only cross two arcs, so at least $n/2$ lines are necessary. (If any arc is uncrossed, then the two points at the end are not separated). Now ...
1
As a partial answer: we can always use about $\frac34n$ lines. (We need $\frac34n -1$ when $n$ is divisible by $4$, but slightly more or fewer in other cases.) This assumes we don't care whether the regions are finite or infinite; as mentioned in the comments, if you want all regions to be finite, we can just use $3$ more lines at the beginning to draw a ...
1
As I understood (see here), a signed $n$-bit integer $a$ (that is, an integer $a$ such that $-2^{n-1}\le a\le 2^{n-1}-1$) is represented in memory in the unsigned form of its $n$-complement $a^*$, which is a binary representation of the unique integer $0\le a'\le 2^n-1$ such that $a=a'\pmod {2^n}$. See also a bit below In two's complement notation, a non-...
1
You might try the sequence $$0, 1, \frac{1}{2}, \frac{1}{3}, \frac{2}{3}, \frac{1}{4}, \frac{3}{4}, \frac{1}{5}, \ldots$$ enumerating all rationals in $[0,1]$, which has all sorts of interesting convergent subsequences.
1
Suppose we set n = 10. It is possible but unlikely that one player wins all 10 flips, scoring 10 points. There is a $(\frac 12)^{10}$ chance of this happening. There is a $10(\frac 12)^{10}$ chance that he scores 8 points, a $45(\frac 12)^{10}$ chance that he scores 6 points, a $120(\frac 12)^{10}$ chance that he scores 4 points, etc. And he can score negative points just ...
1
Your point is very interesting. I would say that both the expected value and the standard deviation would make sense after some $n$ throws. In that game we would expect $Y \rightarrow 0$ because if the result is tails you subtract 1 from the score, and over time you would expect to have roughly the same number of heads as tails. And the standard deviation ...
1
I don't see any contradiction. The standard deviation indicates what values we should expect, and the expected value, counter intuitive to its name, gives the average of the values.
1
This formula is a concise and expressive version of the Koszul formula. It is just a matter of regrouping the terms. It shows that the Levi-Civita covariant derivative is given by a formula which employs only the Lie derivative, the exterior derivative, and the given Riemannian metric. I find this formula very illuminating, because the Lie derivative ...
1
The integral on the left-hand side (divided by $b-a$), which I denote by $\bar u$ in the visualization, can be interpreted as an average over the trajectory of $u$. I have tried to visualize it for two different trajectories in $\mathbb R^2$. In some sense, it is a weighted average over the points of the trajectory, since "segments" where the velocity is high ...
https://math.stackexchange.com/questions/2103339/how-to-use-nonlogical-axioms-in-deduction
# How to use nonlogical axioms in deduction
I'm a little bit unclear about the concept of nonlogical axioms and their potential usage in deduction.
According to this question, nonlogical axioms are those which are not universally valid and cannot be presented using logical quantifiers and connectives alone.
The formal definition of deduction involves three basic ingredients: logical axioms, nonlogical axioms and rules of inference. If we admit that nonlogical axioms cannot be stated in terms of logical concepts alone, how can we use them in deductions based on inference rules?
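A minimal illustration of how a nonlogical axiom enters a deduction (my own addition, using the Peano-style axiom that also appears in the comments below): from the nonlogical axiom $\forall n\,(S(n) \ne 0)$ one derives $S(0) \ne 0$ by universal instantiation, a purely logical rule. The nonlogical axiom supplies the arithmetical content, while the rules of inference that manipulate it are exactly the same ones used with logical axioms, so nothing extra is needed to "use" nonlogical axioms in a derivation.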
• See e.g. first order Peano axioms : in order to prove theorem about arithmetic, we need logical axioms, like e.g. $\forall x (x=x)$, as well as "specific" arithmetical axioms, like e.g. $\forall n (S(n) \ne 0)$. – Mauro ALLEGRANZA Jan 18 '17 at 18:35
• "nonlogical axioms are those are which not universally valid" : CORRECT. – Mauro ALLEGRANZA Jan 18 '17 at 19:37
• "could not be presented by logical quantifiers and connectives" ? Of course, a non-logical axiom need some symbol in addition to quantifiers and connectives (and equality), like the binary predicates $\in$ (for set theory) and $<$ (for arithmetic). But we can use them also in universally valid sentences, like e.g. : $\forall x \forall y(x \in y) \lor \lnot \forall x \forall y (x \in y)$. – Mauro ALLEGRANZA Jan 18 '17 at 19:39
• Even if one defines $\le$ in the reference language of the axiom, its usage within the axiom labels it to a "nonlogical" one, right? – Roboticist Jan 18 '17 at 22:10
• No, axioms are (complete) sentences. For instance, if $\circ$ is the function symbol to be interpreted as the group operator, $\forall a \,. \forall b\,. \forall c \,. (a \circ b) \circ c = a \circ (b \circ c)$ is a nonlogical axiom. – Fabio Somenzi Jan 18 '17 at 18:37
• @Roboticist Maybe it's useful to point out that there are two distinct, albeit related, concepts: nonlogical symbols and nonlogical axioms. The constant, function, and relation symbols (and, according to some authors, also the quantifiers) are known as nonlogical symbols. Their meanings depend on their interpretations. Logical symbols (like $\wedge$) never change their meaning. – Fabio Somenzi Jan 18 '17 at 22:38
https://math.stackexchange.com/questions/4208813/prove-f-to-m-is-a-smooth-vector-bundle
# Prove $F\to M$ is a smooth vector bundle
Let $$F\to M$$ denote the vector bundle over embedded Riemann submanifold $$M\subset \tilde{M}$$.
With each fiber be the set of all bilinear maps $$T_pM\times T_pM \to N_pM$$.Prove this vector bundle is smooth.
My attempt:To do this we can use Vector bundle chart lemma say for example in Lee's ISM book Prop 10.6.
Which needs to give the local trivialization map $$\Phi:\pi^{-1}(U)\to U\times \Bbb{R}^k$$,we may choose it as follows:take the adapted orthnormal frame $$(E_1,...,E_m,E_{m+1},...,E_n)$$ where the first $$m$$ correspond to the embedded submanifold $$M$$ that is $$(E_1,...,E_m)$$ span the tangent space $$T_pM$$ and $$(E_{m+1},...,E_n)$$ span $$N_pM$$.
Then all the bilinear map can be represented as a $$\Bbb{R}^{m^2\times(n-m)}$$ matrix under the chooice of orthnormal frame.Which gives exactly the local trivialization map.
The key point in checking that this is a smooth vector bundle is to check that the transition maps are smooth, so we need to consider two different orthonormal frames and see whether the change between the corresponding representations is smooth. I find it hard to check that this transition is smooth; is there some idea to handle this problem?
I have an alternative idea to handle this problem, using the natural identification between linear maps $$V\to W$$ and bilinear forms $$V\times W^* \to \Bbb{R}$$.
Hence each fiber is exactly the space of trilinear maps $$T_pM\times T_pM \times N^*_pM \to \Bbb{R}$$, and then we can proceed just as in the proof that the mixed tensor bundle is smooth.
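To spell out that identification, here is a sketch using the standard finite-dimensional isomorphisms (nothing below is specific to this problem):

$$\operatorname{Bil}(T_pM \times T_pM,\, N_pM) \;\cong\; T_p^*M \otimes T_p^*M \otimes N_pM \;\cong\; \{\text{trilinear maps } T_pM\times T_pM\times N_p^*M \to \Bbb{R}\}.$$

Viewed this way, $$F$$ is the tensor product bundle $$T^*M\otimes T^*M\otimes NM$$ over $$M$$, so smoothness would follow from the corresponding statement for tensor products of smooth bundles.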
It is a general fact that the (functorial) operations of linear algebra extend naturally to vector bundles. You can see this by using the basic result that a vector bundle is determined (up to isomorphism) by a cover $$\{U_{\alpha}\}$$ of the base space and functions $$g_{\alpha\beta}:U_{\alpha\beta}\rightarrow\text{GL}(n,\mathbb{R})$$ on this cover which satisfy the cocycle condition.
For instance, if bundles $$E_i$$ are determined by cocycles $$\{g_{\alpha\beta}^i\}$$ for $$i=1,2$$, you can define their direct sum $$E_1\oplus E_2$$ as the bundle determined by the transition functions $$\{g_{\alpha\beta}^1\oplus g_{\alpha\beta}^2\}$$. You do have to check differentiability, but that is obvious in the matrix form: $$g_1\oplus g_2=\begin{pmatrix} g_1 & 0\\ 0 & g_2 \end{pmatrix}$$ Similarly, you can define the tensor product and the dual of bundles. Moreover, the identification you mentioned, $$\text{Hom}(V,W)\simeq V^*\otimes W$$, allows the definition of the bundle $$\text{Hom}(E_1,E_2)$$ as before.
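For concreteness, here is a sketch of what the transition functions of $$\text{Hom}(E_1,E_2)$$ look like, with the convention that $$g^i_{\alpha\beta}$$ changes the fiber coordinates of $$E_i$$: they act on a linear map $$A$$ by conjugation,

$$g^{\text{Hom}}_{\alpha\beta}(x)(A) \;=\; g^2_{\alpha\beta}(x)\circ A \circ \big(g^1_{\alpha\beta}(x)\big)^{-1},$$

whose entries are polynomial in the entries of $$g^1_{\alpha\beta}(x)$$, $$g^2_{\alpha\beta}(x)$$ and $$\det\big(g^1_{\alpha\beta}(x)\big)^{-1}$$, hence smooth in $$x$$.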
Now your bundle is $$\text{Hom}(TM\otimes TM,NM)$$.
https://quuxplusone.github.io/blog/2021/01/13/conversion-operator-lookup/
# Fun with conversion-operator name lookup
As of this writing (but perhaps not for very much longer!) the four mainstream compilers on Godbolt Compiler Explorer give four different answers for this simple C++ program:
    #include <cstdio>

    // Minimal filler definitions (not in the original snippet) so the example is self-contained.
    struct T1 {}; struct T2 {};
    struct U1 {}; struct U2 {};

    struct A {
        using T = T1;
        using U = U1;
        operator U1 T1::*() { return nullptr; }
        operator U1 T2::*() { return nullptr; }
        operator U2 T1::*() { return nullptr; }
        operator U2 T2::*() { return nullptr; }
    };
    inline auto which(U1 T1::*) { return "gcc"; }
    inline auto which(U1 T2::*) { return "icc"; }
    inline auto which(U2 T1::*) { return "msvc"; }
    inline auto which(U2 T2::*) { return "clang"; }
    int main() {
        A a;
        using T = T2;
        using U = U2;
        std::puts(which(a.operator U T::*()));
    }
The question is whether U should be looked up in the scope of main or in the scope of A; and the same question for T.
According to the current draft standard, it sounds like the conforming answer is “they should both be looked up in the scope of A”; i.e., GCC’s answer is correct and the others are wrong in three different ways. [basic.lookup.unqual]/5:
An unqualified name that is a component name of a type-specifier or ptr-operator of a conversion-type-id is looked up in the same fashion as the conversion-function-id in which it appears. If that lookup finds nothing, it undergoes unqualified name lookup; in each case, only names that denote types or templates whose specializations are types are considered.
I’m never a fan of lookups that don’t consider certain kinds of names; I’m sure there’s more divergence to be discovered in this area. Anyway, in the type name U T::*, U is the type-specifier and T::* is the ptr-operator, and the whole type is pronounced “pointer to a data member of T, where that data member itself is of type U.” (More concisely: “pointer to data member (of type U) of T,” or “pointer to a U member of T.”)
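As an aside, here is a minimal sketch of that reading of the syntax; the names S, x, and pm below are hypothetical, chosen only for illustration and unrelated to the example above:

    #include <cstdio>

    struct S { int x = 42; };

    int main() {
        int S::*pm = &S::x;          // "pointer to an int member of S": the type is int S::*
        S s;
        std::printf("%d\n", s.*pm);  // read the member through an object with .*
    }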
https://www.esaral.com/q/the-radii-of-the-ends-of-a-bucket-30-cm-high-are-21-cm-and-7-cm-39555/
# The radii of the ends of a bucket 30 cm high are 21 cm and 7 cm
Question:
The radii of the ends of a bucket 30 cm high are 21 cm and 7 cm. Find its capacity in litres and the amount of sheet required to make this bucket.
Solution:
Height of the bucket = 30 cm.
$r_{1}=21 \mathrm{~cm}$
$r_{2}=7 \mathrm{~cm}$
Therefore,
Capacity of the bucket
$=\frac{\pi h}{3}\left[r_{1}^{2}+r_{1} r_{2}+r_{2}^{2}\right]$
$=\frac{22}{7} \times \frac{30}{3}\left[(21)^{2}+21 \times 7+(7)^{2}\right]$
$=20020 \mathrm{~cm}^{3}$
$=20.02$ litres
The slant height of the bucket
$l=\sqrt{h^{2}+\left(r_{1}-r_{2}\right)^{2}}$
$=\sqrt{900+(21-7)^{2}}$
$=\sqrt{900+196}$
$=\sqrt{1096}=33.105 \mathrm{~cm}$
Curved surface area of the bucket
$=\pi\left(r_{1}+r_{2}\right) \times l$
$=\pi(21+7) \times 33.1$
$=88 \times 33.1$
$\approx 2913 \mathrm{~cm}^{2}$
Area of the base
$=\pi r^{2}$
$=\frac{22}{7} \times 7^{2}$
$=154 \mathrm{~cm}^{2}$
Total sheet required to make this bucket
$=2913+154$
$=3067 \mathrm{~cm}^{2}$