https://xn--attsettfravtvivel-klimataktion-utc36c.se/id/data-structures-playlist-e65315

A demonstration of our work can be seen in the video. A sample genre-count dictionary for a cluster with 17 songs would look like {rock: 5, indie-rock: 3, blues: 2, soft-rock: 7}. Although this step is not directly needed for the training part, it is crucial for the evaluation phase. Given a query playlist, its k-nearest neighbors would be the most similar items to it and would be the system recommendations. [11] Mikolov, Tomas, et al. As mentioned before, owing to the meteoric rise in the usage of playlists, playlist recommendation is crucial to music services today. Since the aim of our work is to learn playlist embeddings which can be used for recommendation, we evaluate the quality of the embeddings using a recommendation task. Here’s a quick outline of our proposed approach: Playlists have become a significant part of our music listening experience today. Our approach can also be extended for learning even better playlist representations by integrating content-based (lyrics, audio, etc.) song-embedding models, and for generating new playlists by using variational sequence models. There are nine possible genre labels. To validate our approach, we train a classifier on our dataset consisting of annotated song embeddings. [9] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. With millions of songs at their fingertips, users today have grown accustomed² to: 1.
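The cluster-annotation rule described here — collapse sub-genres into parent genres, then keep the cluster only if one genre holds a clear majority — can be sketched as below. The sub-genre map and the 70% threshold are illustrative assumptions, not the article's exact values (the article derives its mapping from a genre graph).

```python
from collections import Counter

# Illustrative sub-genre -> parent-genre map (assumption, not the
# article's real mapping, which comes from the everynoise.com graph).
PARENT_GENRE = {"indie-rock": "rock", "soft-rock": "rock"}

def annotate_cluster(genre_counts, threshold=0.7):
    """Collapse sub-genres into parents, then label the cluster with the
    majority genre if it covers more than `threshold` of the songs;
    clusters with no clear majority are discarded (None)."""
    merged = Counter()
    for genre, count in genre_counts.items():
        merged[PARENT_GENRE.get(genre, genre)] += count
    genre, count = merged.most_common(1)[0]
    return genre if count / sum(merged.values()) > threshold else None

# The sample cluster from the text: 17 songs.
label = annotate_cluster({"rock": 5, "indie-rock": 3, "blues": 2, "soft-rock": 7})
# rock absorbs indie-rock and soft-rock: 15 of 17 songs -> "rock"
```

With the soft-rock and indie-rock counts folded into rock, the majority covers 15/17 of the songs, so the cluster is labeled rock.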
Most people would call a Java source file a "plain text file" but would not say that each line contains some "record". Annotate the data (songs and playlists) for genre information. All the songs in a cluster with no clear genre majority are discarded for annotation. Then for each genre, we download playlists (along with the corresponding song information) using the Spotify Web API. So instead of predicting a single word, the network outputs an entire sentence, which could be a translation into a foreign language, the next predicted sentence from the corpus, or even the same sentence if the network is trained as an autoencoder. In an array, data is stored in the form of matrices, rows, and columns. [7] Andreja Andric and Goffredo Haus. The main disadvantage of storing the playlist as plain text is its simplicity. [10] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014. We make use of the relationship playlist : songs :: sentences : words, and take inspiration from research in natural language processing to model playlist embeddings the way sentences are embedded. Build a recommendation engine by populating a KD-tree with the learned playlist embeddings, and retrieving search results by utilizing the nearest-neighbor approach. For playlist-genre annotation, only the playlists having all their songs annotated are considered.
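The retrieval step — populate a tree with the learned playlist embeddings, answer queries with nearest neighbors — can be sketched with a brute-force scan; at scale an actual KD-tree (e.g. scipy.spatial.cKDTree) or Spotify's ANNOY would replace the linear search. The embeddings here are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
playlist_embeddings = rng.normal(size=(500, 32))   # stand-in embeddings

def recommend(query, embeddings, k=5):
    """Return indices of the k playlists closest to the query embedding.
    Brute-force Euclidean distance; a KD-tree or ANNOY index would serve
    the same queries without scanning every playlist."""
    dists = np.linalg.norm(embeddings - query, axis=1)
    return np.argsort(dists)[:k]

neighbors = recommend(playlist_embeddings[42], playlist_embeddings)
# The query playlist is its own nearest neighbor (distance 0).
```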
The array list is basically a self-resizing array or, in other words, a dynamic array. A plain text format is unable to provide an appropriate representation of a complex data structure like a playlist. There are two different types of data, numerical and alphanumeric, and these two data types specify the nature of the data items that undergo certain operations. In the paper, there are many more evaluation techniques for assessing the quality of playlist embeddings with respect to the encoded information, which are out of scope for this post. Train a sequence-to-sequence⁹ model over the data to learn playlist embeddings. The second disadvantage is that the fundamental structure of the plain text file format means that each line of the file contains exactly one record or case in the data set. We have presented a seq2seq-based approach for learning playlist embeddings, which can be used for tasks such as playlist discovery and recommendation. Towards playlist generation algorithms using RNNs trained on within-track transitions. arXiv preprint arXiv:1606.02096, 2016. However, this is not the case, as storing the playlist would mean that we also need to store all the information about the songs, such as the name and song time. An average of 100 precision values for each query is considered. Now my question is: instead of storing the playlist as plain text in a text file, what would be a more suitable way of storing objects? We follow [1] in doing the data cleanup by removing the rare tracks and outlier-sized playlists (having fewer than 10 or more than 5000 songs).
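One common answer to the question above is a structured, self-describing format such as JSON, in which the playlist and its songs are distinct object types — exactly what a line-per-record plain text file cannot express. The field names below are illustrative.

```python
import json

# Illustrative schema (assumed field names): the playlist and its songs
# are two different object types nested together.
playlist = {
    "name": "road trip",
    "songs": [
        {"title": "Song A", "artist": "Artist X", "duration_s": 215},
        {"title": "Song B", "artist": "Artist Y", "duration_s": 187},
    ],
}

serialized = json.dumps(playlist, indent=2)   # structured, self-describing
restored = json.loads(serialized)             # lossless round trip
```

A binary or database-backed format would also avoid the one-byte-per-character overhead of plain text, at the cost of human readability.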
However, over the past couple of years, from a research perspective, playlist recommendation has become analogous to playlist prediction/creation⁷ ⁸ and continuation⁵ ⁶ rather than playlist discovery. The issue in going this route is the subjectivity associated with it. Artist genre is applied to each corresponding song and a genre-frequency (count) dictionary is created. This method of generating sentence embeddings proves to be a stronger baseline compared to traditional averaging. They were inescapable now. In Proceedings of the 12th ACM Conference on Recommender Systems, pages 527–528. ACM, 2018. [6] Maksims Volkovs, Himanshu Rai, Zhaoyue Cheng, Ga Wu, Yichao Lu, and Scott Sanner. This data structure behaves exactly like an ordinary array but with an additional capacity property that invokes a size expansion every time it is exceeded. Automatic playlist generation based on tracking user's listening habits. Multimedia Tools and Applications, 29(2):127–151, 2006. These fixed-length embeddings can then be used for recommendation purposes. The embeddings capture information such as type, genre, variety, order, and the number of songs in the playlist, and can be used for tasks such as playlist discovery and recommendation. In ISMIR, 2002. The tree data structure discussed in the Recommendation Task section can be directly used for this purpose. [5] Ching-Wei Chen, Paul Lamere, Markus Schedl, and Hamed Zamani. Hence, we need to bring down the number of genres (output labels) from 2680 to a more manageable number.
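The capacity-doubling behavior described above can be sketched as a minimal array list; the initial capacity of 4 is an arbitrary choice for the sketch.

```python
class ArrayList:
    """Minimal dynamic array: an ordinary fixed-size backing array plus
    a capacity that doubles whenever an append would exceed it."""

    def __init__(self):
        self.capacity = 4                     # arbitrary starting size
        self.size = 0
        self.data = [None] * self.capacity

    def append(self, item):
        if self.size == self.capacity:        # capacity exceeded:
            self.capacity *= 2                # double and copy over
            grown = [None] * self.capacity
            grown[:self.size] = self.data[:self.size]
            self.data = grown
        self.data[self.size] = item
        self.size += 1

    def __getitem__(self, i):
        if not 0 <= i < self.size:
            raise IndexError(i)
        return self.data[i]

songs = ArrayList()
for s in ["s1", "s2", "s3", "s4", "s5"]:      # fifth append triggers growth
    songs.append(s)
```

Amortized over many appends, each insertion costs constant time even though an individual growth step copies the whole backing array.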
This interactive website contains a list of some 2600+ genres, graphed out according to their relationships with each other, along with an audio example for each genre. Evaluate the embeddings using our proposed evaluation tasks. Let's unveil the secret. What are the disadvantages of storing a playlist as plain text in a text file? We can use the matrix level, row index, and column index to access the matrix elements. The truth is that it just manages an ordinary static array under the hood. Two-stage model for automatic playlist continuation at scale. A plain text file format does not allow for sophisticated data models. We use the attention technique for the seq2seq models in this work to learn playlist embeddings that capture the long-term dependencies between the songs in a playlist, which matters because of the relatively long length of playlists (50–1000 songs). We experiment with 2 variants of seq2seq models: 1. One of the direct applications of this work is a recommendation engine for playlists. So there would be information about two different types of objects. Immediate attainment of their music demands. 2.
I don't see anything wrong with storing a playlist as a plain text file. [4] De Mooij, A. M., and W. F. J. Verhaegh. I will be discussing that in another post. The Recommendation task, as shown in the figure below, captures some interesting insights about the effectiveness of different models for capturing different characteristics. This step aims to label the playlists with their appropriate genre. 3. Take a variable last and initialize it to N (the number of songs). … 5. Swap(arraylist[last-1], arraylist[K]). 6. last--. The low-level format of storing everything as characters with one byte per character is very inefficient in terms of the amount of computer or phone memory required. There are numerous types of data structures, generally built upon simpler primitive data types. There are certain problems with the information available so far: 1. There is no notion of "record", and even "line" is a derived concept.
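The arraylist steps above (initialize last to N, Swap(arraylist[last-1], arraylist[K]), decrement last) describe a standard swap-based shuffle. The sketch below assumes the missing step draws K uniformly below last, which the fragments do not state explicitly.

```python
import random

def shuffle_playlist(songs):
    """Swap-based shuffle following the arraylist steps: keep `last` at
    the boundary of the unshuffled prefix, swap arraylist[last-1] with a
    random position K below it, then decrement last (Fisher-Yates)."""
    arraylist = list(songs)           # copy, so the input stays intact
    last = len(arraylist)             # take a variable last, init to N
    while last > 1:
        k = random.randrange(last)    # assumed: K uniform in [0, last)
        arraylist[last - 1], arraylist[k] = arraylist[k], arraylist[last - 1]
        last -= 1                     # last--
    return arraylist

original = ["song-a", "song-b", "song-c", "song-d", "song-e"]
shuffled = shuffle_playlist(original)
```

Each song ends up in every position with equal probability, and the shuffle runs in a single pass over the array.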
As our baseline model, we experiment with a weighted variant of the Bag-of-Words model¹⁴, which uses a weighted averaging scheme to get the sentence embedding vectors, followed by their modification using singular-value decomposition (SVD). We use sequence-to-sequence learning⁹ to learn embeddings for playlists that capture their semantic meaning without any supervision. The name sequence-to-sequence learning at its very core implies that the network is trained to take in sequences and output sequences. This only works well when a data set contains information about only one type of object. An array is a number of elements in a specific order, typically all of the same type (depending on the language, individual elements may either all be forced to be the same type, or may be of almost any type). The speed of access and space efficiency for large data sets are also not ideal. A query playlist is randomly selected and the search results are compared with the queried playlist in terms of genre and length information.
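The weighted Bag-of-Words baseline¹⁴ — frequency-weighted averaging followed by removal of the projection on the first singular vector — can be sketched as below. The smoothing parameter `a` and the toy vocabulary are illustrative assumptions.

```python
import numpy as np

def sif_embeddings(sentences, word_vecs, word_freq, a=1e-3):
    """Weighted bag-of-words sentence embeddings in the style of [14]:
    average word vectors with weight a / (a + p(w)), then subtract each
    embedding's projection on the corpus's first singular vector."""
    emb = np.stack([
        np.mean([a / (a + word_freq[w]) * word_vecs[w] for w in s], axis=0)
        for s in sentences
    ])
    u = np.linalg.svd(emb, full_matrices=False)[2][0]  # 1st singular vector
    return emb - np.outer(emb @ u, u)

# Toy vocabulary; real song vectors would come from a word2vec-style
# model [11], with songs playing the role of words.
word_vecs = {w: v for w, v in zip(["rock", "blues", "jazz", "pop"], np.eye(4))}
word_freq = {"rock": 0.4, "blues": 0.2, "jazz": 0.3, "pop": 0.1}
sentences = [["rock", "blues", "rock"], ["jazz", "pop"], ["pop", "rock"]]
emb = sif_embeddings(sentences, word_vecs, word_freq)
```

Down-weighting frequent words and removing the shared first component is what makes this a stronger baseline than plain averaging.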
Also, BoW models capture genre information better than seq2seq models, while length information is better captured by the seq2seq models, demonstrating the suitability of different models for different tasks. For this work, we use the seq2seq framework as an autoencoder, where the task of the network is to reconstruct the input playlist and, in doing so, learn a compact representation of the input playlist which captures its properties. [3] Fields, Ben, and Paul Lamere. Our work aims to represent playlists in a way which can be used to discover and recommend existing playlists. The resulting song embeddings are then clustered into 200 clusters (an arbitrarily chosen number, in an attempt to maintain the balance between the feasibility of the annotation process and the size of the formed clusters). Our system can be used for playlist discovery and recommendation. From the user perspective, playlists are an effective way to discover new music and artists. Advances in neural information processing systems, pages 3104–3112, 2014. Surely the magic behind the array list can't be that complicated. We parse the data from the home page of this website and get the list of all the genres. github.com/spotify/annoy (2017). This allows the network to capture more contextual information for the decoder to predict the output symbol.
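One way to read the evaluation described here — compare a query playlist's retrieved neighbors on genre and average precision over randomly chosen queries — is the precision-at-k sketch below. The data layout is illustrative, not the paper's actual format.

```python
import numpy as np

def genre_precision_at_k(query_genre, neighbor_genres):
    """Fraction of retrieved neighbors that share the query's genre."""
    return sum(g == query_genre for g in neighbor_genres) / len(neighbor_genres)

def mean_precision(queries, k=10, n_queries=100, seed=0):
    """Average precision@k over randomly selected query playlists.
    `queries` maps playlist id -> (genre, ranked neighbor genres);
    this mapping is an assumed stand-in for the real retrieval output."""
    rng = np.random.default_rng(seed)
    ids = rng.choice(list(queries), size=min(n_queries, len(queries)),
                     replace=False)
    return float(np.mean([
        genre_precision_at_k(queries[i][0], queries[i][1][:k]) for i in ids
    ]))

queries = {
    "p1": ("rock", ["rock", "rock", "blues", "rock"]),
    "p2": ("blues", ["blues", "blues"]),
}
score = mean_precision(queries)   # mean of 3/4 and 2/2
```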
[14] Arora, Sanjeev, Yingyu Liang, and Tengyu Ma. "A simple but tough-to-beat baseline for sentence embeddings." (2016). "Efficient estimation of word representations in vector space." arXiv preprint arXiv:1301.3781 (2013). Filter the data by removing noise (rare songs, duplicate songs, outlier-sized playlists, etc.). ACM, 2018. We must strike a balance between the various requirements. In Proceedings of the ACM Recommender Systems Challenge 2018, page 9. However, in the absence of such annotated datasets, we evaluate our proposed approach by measuring the extent to which the playlist space created by the embedding models is relevant, in terms of the similarity of genre and length information of closely-lying playlists. Further, only those playlists are assigned genres for which more than 70% of the songs agree on a genre. Based on the observed genre distribution in the data, and as a result of clustering sub-genres (such as soft-rock) into parent genres (such as rock), the genres finally chosen for annotating the clusters are: Rock, Metal, Blues, Country, Reggae, Latin, Electronic, Hip-Hop, Classical. [12] Anita Shen Lillie. MusicBox: Navigating the space of your music. A bidirectional seq2seq network is different from the unidirectional variant in the sense that a bidirectional RNN is used, meaning the hidden state is the concatenation of a forward RNN and a backward RNN that read the sequences in two opposite directions.
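The noise-filtering step — remove rare tracks and outlier-sized playlists — can be sketched as below. The 10/5000 size bounds come from the text; the `min_track_count` rarity threshold is an assumed stand-in, since the text does not define "rare".

```python
from collections import Counter

MIN_SONGS, MAX_SONGS = 10, 5000    # outlier bounds from the text

def clean(playlists, min_track_count=2):
    """Drop outlier-sized playlists, then drop tracks occurring fewer
    than `min_track_count` times in the whole corpus (assumed rule)."""
    playlists = [p for p in playlists if MIN_SONGS <= len(p) <= MAX_SONGS]
    counts = Counter(t for p in playlists for t in p)
    playlists = [[t for t in p if counts[t] >= min_track_count]
                 for p in playlists]
    # Re-apply the size filter: removing rare tracks can shrink a playlist.
    return [p for p in playlists if MIN_SONGS <= len(p) <= MAX_SONGS]

p1 = [f"t{i}" for i in range(12)]
p2 = list(p1)                      # second playlist sharing the tracks
p3 = ["one-off"] * 3               # too short; dropped by the size filter
kept = clean([p1, p2, p3])
```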
The data cleanup leaves us with 755k unique playlists and 2.4 million unique tracks. For the genre list, we use everynoise.com. The genre classifier is trained with the training and test sets kept separate. Ideally, a recommendation system would be evaluated by having user-labeled data. For the length-based evaluation, length bins {30…250}, corresponding to bins of size 20, are created. We use the Spotify ANNOY library¹³ to populate the tree structure with the playlist embeddings and recommend/retrieve similar playlists from the corpus. Storing the playlist as plain text would also lead to massive redundancy (repetition of values).

[1] https://newsroom.spotify.com/2018-10-10/celebrating-a-decade-of-discovery-on-spotify/
[2] Keunwoo Choi, George Fazekas, et al. Towards playlist generation algorithms using RNNs trained on within-track transitions. arXiv preprint arXiv:1606.02096, 2016.
[3] Fields, Ben, and Paul Lamere. "Finding a Path through the Jukebox — the playlist tutorial." ISMIR, Utrecht, 2010.
[12] Anita Shen Lillie. MusicBox: Navigating the space of your music. Massachusetts Institute of Technology, 2008.
[13] github.com/spotify/annoy (2017).
https://www.learningfromthecurve.net/health-management/2020/04/24/nowcasting-the-bear-with-google-trends

## Nowcasting the Bear with Google Trends
###### J. Bughin
Dr Jacques Bughin, UN consultant, Solvay Business School ULB, Portulans Institute and G20Y, former Director McKinsey Global Institute, and senior partner McKinsey & Company.
### 1. March 16
The stock market has taken a major downbeat again on this day, possibly as a result of mounting fear that COVID-19 is more difficult than imagined to contain in a short time, leading to inevitable sanitary risks and a breakdown of aggregate demand. Looking at Google Trends, as I did in my previous article, five points are becoming clear:
1. “Stock market crash” and “recession” online searches are strongly and positively correlated, and have spiked five times in the last 5 years: end of Aug 2016, early November 2016, early Feb 2018, the last three weeks of Dec 2018, and now recently from Feb 23. Those spikes coincide with times the stock markets were sharply down, and the change in intensity of searches (especially “stock market crash”) is closely linked to the size of the drop, e.g. minus 1,400 points of the Dow Jones Industrial in Aug 2016 (for a spike of 2.6 times average crash-term search intensity), minus 2,300 in Feb 2018 (for a spike of 4.5 times average search intensity), and a mid-February fall of 6,000 points (for a spike of 6.7 times average search intensity for “market crash”).
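The "k times average search intensity" spike multiples quoted above can be computed directly from a search-intensity series; the series below is a synthetic stand-in, not actual Google Trends data.

```python
import numpy as np

def spike_multiples(search_intensity, threshold=2.0):
    """Report (index, multiple-of-average) for days where search
    intensity exceeds `threshold` times the series average."""
    x = np.asarray(search_intensity, dtype=float)
    multiples = x / x.mean()
    return [(i, round(float(m), 1)) for i, m in enumerate(multiples)
            if m > threshold]

# Toy series: flat baseline of 10 with one spike day at 65.
spikes = spike_multiples([10] * 19 + [65])
```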
2. We find a strong correlation between stock-market-crash search intensity and Google search intensity for coronavirus lately. In fact, there is a positive correlation in level of about 65%, and in the range of 60-70% in the window ($t-3$ days, $t+3$ days). Typically, a 10-point increase in searches for coronavirus leads to a 6-point increase in stock market crash searches.
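The level correlation and the 10-to-6-point relationship can be reproduced on toy data with NumPy; the two series below are synthetic stand-ins for the daily Google Trends indices, built to have a 0.6 slope.

```python
import numpy as np

# Synthetic stand-ins for the two daily search-intensity indices.
coronavirus = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
market_crash = 0.6 * coronavirus + 2.0    # 10-point rise -> 6-point rise

r = np.corrcoef(coronavirus, market_crash)[0, 1]      # level correlation
slope = np.polyfit(coronavirus, market_crash, 1)[0]   # points per point
```

On real data the correlation would be ~0.65 rather than the perfect fit the synthetic series produces; the slope is the "10 points in, 6 points out" coefficient.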
3. Combining both 1 and 2, and doing possibly heroic and simplistic maths, the increase of coronavirus searches to their worldwide peak has been more or less associated with a 10-percentage-point loss in the Dow Jones Industrial, or about 2,500 points.
4. The stock market reduction is already twice as large as implied by online searches. The optimistic view would be that the market is over-reacting. The efficiency-hypothesis side (“the market knows it all”) may rather suggest that search intensity for the pandemic is uniquely high, and might have more disproportionate effects than what we found by simple linear extrapolation. Looking back to 2004 (the earliest we can get data on Google Trends), market-crash search intensity at the current COVID-19 level was only found in 2007 and 2009, at the previous crisis - as is the case for searches for the term “recession”.
5. This recession may be caused by supply factors (travel stops, disruption in value chains, deleveraging affecting company prospects and liquidity), but may be driven by a perception of damage on the consumption side, requiring a budget-expansion fix if the shock is permanent.
In fact, we looked in previous research at how different category searches may affect sales. In particular, we looked at how non-food shopping searches could nowcast retail sales, or aggregate private consumption, and at how automobile searches could affect car sales, etc.1 The findings were that changes in those category search intensities were able to nowcast next-quarter spending changes, in such a way that the dynamic of searches we are witnessing now, linked to COVID-19, may mean a drop in consumer spending of rather large significance, possibly in the range of 5 to 10%. If our analysis from 2011 - where we managed to find a cointegration between search intensity and retail sales - was anywhere near right, it may also mean that retail spending might be affected structurally (read: permanently).
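A one-lag linear nowcast in the spirit described here — current-quarter search change predicting next-quarter spending change — can be sketched as below. This is far simpler than the cointegration analysis the text refers to, and the data are synthetic.

```python
import numpy as np

def nowcast_next_quarter(search_changes, spending_changes):
    """Fit next-quarter spending change on the current quarter's
    search-intensity change, then extrapolate from the latest quarter."""
    x = np.asarray(search_changes[:-1], dtype=float)    # quarter t
    y = np.asarray(spending_changes[1:], dtype=float)   # quarter t + 1
    slope, intercept = np.polyfit(x, y, 1)
    return slope * search_changes[-1] + intercept

# Synthetic data where spending follows half of last quarter's searches.
searches = [2.0, 4.0, -6.0, 8.0, -10.0]
spending = [0.0, 1.0, 2.0, -3.0, 4.0]
pred = nowcast_next_quarter(searches, spending)
```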
We might thus have a clear demand shock in the making here. Monetary policy might fall short of a full fix; perhaps we need a New Deal-style plan. Should we then use this "pseudo-excuse" to invest public budget a) in a more comprehensive and agile global healthcare infrastructure (so as to be prepared for the next pandemics), as well as b) in rebuilding habitats for animals away from our cities (so as to avoid the otherwise inevitable rise of scary zoonoses)?
© Jacques Bughin. Written March 15. Comments more than welcome. All errors are mine. References listed as they are found in the text.
Learning from the curve
An open source research project on COVID-19 and economics. A collaboration between academics to reach out to policy makers and the general public.
https://www.gradesaver.com/textbooks/math/trigonometry/CLONE-68cac39a-c5ec-4c26-8565-a44738e90952/chapter-2-acute-angles-and-right-triangles-section-2-1-trigonometric-functions-of-acute-angles-2-1-exercises-page-54/53 | ## Trigonometry (11th Edition) Clone
Published by Pearson
# Chapter 2 - Acute Angles and Right Triangles - Section 2.1 Trigonometric Functions of Acute Angles - 2.1 Exercises - Page 54: 53
#### Answer
$\sec 30^{\circ} = \frac{2\sqrt{3}}{3}$
#### Work Step by Step
$\sec 30^{\circ}$

We must find side $x$ of the $30^{\circ}$-$60^{\circ}$ right triangle.

Pythagorean Theorem: $c^{2} = a^{2} + b^{2}$

$2^{2} = 1^{2} + x^{2}$

$4 = 1 + x^{2}$

$x^{2} = 3$

$x = \sqrt{3}$

Now consider the triangle from the perspective of the $30^{\circ}$ angle. Hypotenuse = 2, Opposite = 1, Adjacent = $\sqrt{3}$.

$\sec 30^{\circ} = \frac{\text{Hypotenuse}}{\text{Adjacent}} = \frac{2}{\sqrt{3}} = \frac{2\sqrt{3}}{3}$

Therefore: $\sec 30^{\circ} = \frac{2\sqrt{3}}{3}$
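The result can be double-checked numerically (a quick verification sketch, not part of the textbook solution):

```python
import math

# Numeric check that sec 30° equals 2*sqrt(3)/3.
sec_30 = 1 / math.cos(math.radians(30))
exact = 2 * math.sqrt(3) / 3
print(sec_30, exact)
```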
https://www.coursehero.com/file/64178758/exam-11-100pdf/ | exam 11 (100).pdf - Question Two identical metal balls of radii 2.50 cm are at a center-to-center distance of 1.00 m from each other(Fig P26.34 Each
Question: Two identical metal balls of radii 2.50 cm are at a center-to-center distance of 1.00 m from each other (Fig. P26.34). Each ball is charged so that a point at the surface of the first ball has an electric potential of +1.20 × 10³ V and a point at the surface of the other ball has an electric potential of −1.20 × 10³ V. What is the total charge on each ball? FIGURE P26.34 Answer: The expression for the electric potential is $V = \frac{kq}{r}$. Here, V is the electric potential, k is Coulomb's constant, q is the charge, and r is the radius of the sphere. As per the question, we need to determine the electric potential due to a solid conducting sphere at various points, taking the potential to be 0 at infinite distance from its centre.
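A rough numerical sketch of the answer (an illustration, not the worked solution from the document): treat each ball as a point charge and keep the first-order correction from the oppositely charged neighbour.

```python
# First-order estimate of the charge on each ball, treating both balls as
# point charges and ignoring induced-charge redistribution (an assumption;
# a textbook may instead use the isolated-sphere formula V = kq/r).
k = 8.99e9        # Coulomb constant, N·m²/C²
r = 0.025         # ball radius, m
d = 1.00          # center-to-center distance, m
V = 1.20e3        # potential at the surface of the positive ball, V

# V = kq/r - kq/d  =>  q = V / (k * (1/r - 1/d))
q = V / (k * (1 / r - 1 / d))
print(f"q ≈ {q:.3e} C")   # on the order of a few nanocoulombs
```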
https://www.hotukdeals.com/discussions/cannot-find-the-dvd-the-island-this-is-the-greek-tv-series-by-hislop-greek-title-is-to-nisi-2296500 | # Cannot find the DVD The Island - this is the Greek TV Series by Hislop - Greek title is To Nisi
Found 7th Oct 2015
DVD is always out of stock on Amazon. Cannot find a download or on demand anywhere?
Any advice. Got a copy for sale?
Have you tried to find a torrent? I'm willing to wager it's available out there somewhere. You can always purchase it later when it's back in stock; that way you can balance the torrent vs purchase karma.
Thanks. I have tried. There are the usual scammers saying they have but no.
Bobef90
Thanks. I have tried. There are the usual scammers saying they have but no.
I've sent you a PM
http://openstudy.com/updates/56021a4de4b09ef4e5eb6d25 | anonymous one year ago how do i solve this? thanks!!
1. anonymous
$\int\limits_{0}^{1} \frac{ 1 }{ 1+\sqrt[3]{x} }dx$
2. anonymous
what do you know so far?
3. anonymous
would u = 1 + 3sqrtx and du = dx?
4. Astrophysics
This requires a u sub, what is the obvious u sub here?
5. anonymous
you seem to know enough to tackle the problem
6. Astrophysics
Try it out and see what you get!
7. Astrophysics
You can also try $u=\sqrt[3]{x}$ but then you probably will end up doing two substitutions, mess around with it and see what you get
8. anonymous
ohh okay, err so would this be on the right track? u = 1 + 3sqrtx and du = 3udu du = 3sqrtx
9. anonymous
I'm still quite confused :( @Astrophysics :(
10. anonymous
oops *dx=3sqrtx ? :/
11. Jhannybean
$\int \frac{dx}{1+\sqrt[3]{x}}$$u=x~,~ du=dx$$\int\frac{du}{1+u^{1/3}}$
12. Jhannybean
I left out the numbers
13. anonymous
okay!! and so the next step would be the solve this integral? :/
14. ganeshie8
you really want to sub $$u=x$$ ?
15. anonymous
wait, so we don't sub u=x? :/
16. ganeshie8
what good does that do
17. anonymous
im not sure :/ what would i make the sub be then? :/
18. ganeshie8
unless u hate the letter x ..
19. zepdrix
$$\large\rm u=1+\sqrt[3]{x}$$ or $$\large\rm u=\sqrt[3]{x}$$ I think both of these subs will work out just fine :) Err no, go with the first one. I dunno bout that other guy, he gave me a funny look.
20. zepdrix
Where'd you leave off food man? :O Where you stuck at?
21. anonymous
lol im confused:/ so u = x ?
22. Jhannybean
Oh what about changing this to arctan function..
23. zepdrix
$\large\rm u=1+\sqrt[3]{x},\qquad\qquad du=\frac{1}{3x^{2/3}}dx$Do you understand how to differentiate that u? :o
24. anonymous
i think so :) what is next?
25. zepdrix
Now you have to apply a bunch of sneaky little tricks!
26. anonymous
how do i do that?? :O
27. anonymous
just differentiate what you have and try to finish it
28. zepdrix
Isolate the dx, that's one of the things we're substituting stuff in for after all.$\large\rm 3x^{2/3}du=dx$I'm gonna use rules of exponents to write it like this:$\large\rm 3(x^{1/3})^2du=dx$
29. anonymous
that means $$\large x^\frac{1}{3}$$
30. zepdrix
We can sub something in for that x^(1/3) by using our equation involving u.
31. zepdrix
$$\large\rm u=1+x^{1/3}\quad\to\quad x^{1/3}=u-1$$
32. zepdrix
So there is our dx,$\large\rm 3(u-1)^2du=dx$
33. zepdrix
Was that super confusing? :o
34. Jhannybean
Oh why did I miss this lol D'oh!
35. anonymous
As I have said earlier, you have sufficient knowledge to tackle this problem. Just see to it that you finish what you started so you can learn different techniques along the way.
36. anonymous
okay, haha yes a bit confusing but i think i follow.. sort of…. :P what do i do next?
37. zepdrix
Substitute in your pieces,$\large\rm \color{orangered}{1+\sqrt[3]{x}=u},\qquad\qquad \color{royalblue}{dx=3(u-1)^2du}$$\large\rm \int\limits \frac{1}{\color{orangered}{1+\sqrt[3]{x}}}\color{royalblue}{dx}=?$
38. zepdrix
And from there, it shouldn't be too bad :) You have to expand out the square in the numerator, and you can divide each term by u, and integrate term by term.
39. anonymous
best coaching ever
40. anonymous
okay, er so do i get this? $\frac{ 3x ^{2/3} }{ 2 } - 3\sqrt[3]{x} + 3\log(\sqrt[3]{x}+1) + C$
41. anonymous
@zepdrix ?
42. zepdrix
sec, doing some calculations :)
43. anonymous
okie :)
44. zepdrix
What did you get after integrating in u? I feel like you're missing a 2 in the middle maybe.$\large\rm =3\left(\frac{1}{2}u^2-2u+\ln u\right)$Something similar or no? :o We can substitute back in a sec, I'm just curious if yours looks the same up to this point.
45. anonymous
yes, i have that :
46. anonymous
:)
47. zepdrix
When you get to this point, you normally have two options: ~find new boundary values for your integral, in terms of u, instead of x. ~or undo your substitution and use the original values for integrating. I have a feeling... that with this monster it's going to be easier to get new boundaries for u as opposed to subbing back in those weird x's.$\large\rm x=0 \qquad\to\qquad u=?$$\large\rm x=1\qquad\to\qquad u=?$
48. anonymous
ohh okay :) u=1 , u=2 ?
49. zepdrix
$\large\rm =3\left(\frac{1}{2}u^2-2u+\ln u\right)_1^2$Mmm sounds good!
50. anonymous
so now i plug in? and i get 3((2-4+ln(2) -(1/2-2+ln(1)) ?
51. anonymous
so 3(-2+ln(2) - 1/2 +2 -ln(1) = 3(-1/2 +ln(1) ?
52. zepdrix
53. anonymous
oh oops so it should be 3(-1/2 + ln(2) + ln(1)) ?
54. zepdrix
You could go a tad further with it if you wanted, ln(1)=0. $\large\rm =ln(2^3)-\frac{3}{2}$ but whatever, yay good job \c:/
55. anonymous
ooh okay, so would that be my solution?
56. zepdrix
Yes, that or a decimal value, depending on which your teacher prefers.
57. anonymous
ooh yay!! thanks so much!! to all of you!!:)
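As a sanity check on the thread's final answer, the integral can be evaluated numerically and compared with $3\ln 2 - \frac{3}{2}$ (a verification sketch, not part of the original discussion):

```python
import math

# Midpoint-rule approximation of the integral of 1/(1 + x^(1/3)) over [0, 1],
# compared with the closed form 3*ln(2) - 3/2 derived in the thread.
n = 200_000
h = 1.0 / n
approx = sum(h / (1 + ((i + 0.5) * h) ** (1 / 3)) for i in range(n))
exact = 3 * math.log(2) - 1.5
print(approx, exact)
```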
https://plainmath.net/2359/simplify-expression-express-rational-exponents-positive-displaystylesqrt | Simplify the expression and express the answer using rational exponents. Assume that all letters denote positive numbers. displaystylesqrt{{{x}^{3}}}
Question
Simplify the expression and express the answer using rational exponents. Assume that all letters denote positive numbers.
$$\sqrt{x^{3}}$$
2020-11-27
Concept used:
If $a$ is a real number, $n$ is a positive integer, and $\sqrt[n]{a^{m}}$ is a real number, then the radical expression $\sqrt[n]{a^{m}}$ is equivalent to the rational exponent expression $a^{m/n}$.
The above statement can be express as,
$$\sqrt[n]{a^{m}} = a^{m/n}$$
Calculation:
The given expression is $$\sqrt{x^{3}}$$.
The property of the $n$th root is
$$\sqrt[n]{a^{m}} = a^{m/n}$$
Substitute 2 for n, 3 for m and x for a in the above equation.
$$\sqrt[2]{x^{3}} = \left(x^{3}\right)^{1/2}$$
$$= x^{3/2}$$
Hence, the simplified form of the expression $$\sqrt{x^{3}}$$ is $$x^{3/2}$$.
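A quick numerical spot-check of the identity (an illustrative addition, not part of the original solution):

```python
import math

# Spot-check that sqrt(x**3) equals x**1.5 for a few positive x.
for x in (0.5, 2.0, 7.3):
    assert math.isclose(math.sqrt(x ** 3), x ** 1.5, rel_tol=1e-12)
print("ok")
```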
https://math.stackexchange.com/questions/375270/size-of-largest-prime-factor?rq=1 | # Size of largest prime factor
It is well known and easy to prove that the smallest prime factor of an integer $n$ is at most equal to $\sqrt n$. What can be said about the largest prime factor of $n$, denoted by $P_1(n)$? In particular:
What is the probability that $P_1(n)>\sqrt n$ ?
More generally, what is the expected value of the size of $P_1(n)$, measured by $\frac{\log P_1(n)}{\log n}$ ?
• The largest prime factor of $p$ is $p$ itself, where $p$ is a prime. This is obvious, though. Apr 28, 2013 at 14:50
Take the negation and this is a very well-known question: what is the probability that all prime factors of $n$ are $\le \sqrt{n}$? The answer is known quite generally: for any real $u\ge 1$, the probability that the prime factors of $n$ are $\le n^{1/u}$ is given by the Dickman–de Bruijn rho function, defined by a delay-differential equation. For $u=2$ we have $\rho(u) = 1-\log 2$, as in Ross Millikan's answer, but there is a very easy calculation that gives this particular case:
$$\#\{n \le x: P_1(n) > \sqrt{n}\} = \sum_{p} \#\{n \le x, n < p^2: p \mid n\} = \sum_{p\le \sqrt{x}} (p-1) + \sum_{p > \sqrt{x}} \lfloor x/p \rfloor \\ = x \log 2 + O(x/\log x),$$
where the main term comes from Mertens' theorem on $\sum_p {1/p}$ and the error terms can be deduced from the Prime Number Theorem (or Chebyshev's upper bound on $\pi(x)$).
Here, by convention, $p$ is assumed to only take prime values. The reason this is so simple is that no $n$ here can have more than one prime factor $> \sqrt{n}$.
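This density is easy to check empirically with a short script (an illustrative addition; note that at $x = 10^6$ the observed fraction overshoots $\log 2$ because of the slow $O(1/\log x)$ convergence):

```python
import math
import random

def largest_prime_factor(n):
    """Largest prime factor of n >= 2, by trial division."""
    largest = 1
    d = 2
    while d * d <= n:
        while n % d == 0:
            largest = d
            n //= d
        d += 1
    return n if n > 1 else largest

random.seed(1)
samples = [random.randrange(2, 10**6) for _ in range(10000)]
factors = [largest_prime_factor(n) for n in samples]

# P(P_1(n) > sqrt(n)): compare with log 2 ≈ 0.693 (agreement is loose at 10**6)
frac = sum(p * p > n for p, n in zip(factors, samples)) / len(samples)
# mean of log P_1(n) / log n: compare with the Golomb-Dickman constant ≈ 0.624
avg = sum(math.log(p) / math.log(n) for p, n in zip(factors, samples)) / len(samples)
print(frac, avg)
```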
The answer to your second question is known as the Golomb-Dickman constant. Wikipedia gives it as about $0.62433$, but I doubt anything is known about its rationality, say.
• Very nice, thanks.
– lhf
Apr 29, 2013 at 0:58
• Is there something you could suggest to further read about it?, it feels trivial but I get lost as to why it looks like $\sum_{p\le \sqrt{x}} (p-1) + \sum_{p > \sqrt{x}} \lfloor x/p \rfloor$ and either how Mertens/PNT give the estimates, thanks. Mar 14, 2020 at 16:57
• @DanielD. If $p \le \sqrt{x}$ then the $n < p^2$ condition dominates and there are exactly $p-1$ multiples of $p$ up to (not including) $p^2$. If $p > \sqrt{x}$ then the $n \le x$ condition dominates and the summand counts multiples of $p$ up to $x$. Mar 14, 2020 at 17:12
• Thanks again that was useful Mar 22, 2020 at 0:02
In Hans Riesel, Prime Numbers and Computer Methods for Factorization, he gives a few approaches to largest and second largest prime factor. On pages 157-158, he gives a heuristic for a "typical" factorization, that suggests the largest gives $$\log P_1 / \log n \approx 1 - 1/e \approx 0.6321,$$ $$\log P_2 / \log n \approx (1 - 1/e) / e \approx 0.2325.$$ On page 161 he mentions that Knuth and Trabb-Pardo get $0.624, \; \; 0.210$ with a more rigorous argument. This is 1976, Theoretical Computer Science, volume 3, pages 321-348. Analysis of a Simple Factorization Algorithm. So I would say you want to get a copy of Knuth and Trabb-Pardo, which is reproduced, with later comments, in KNUTH
He then presents the Erdos-Kac theorem on pages 158-159, finally giving probability distribution curves for the three largest prime factors on page 163. These graphs would be what I call "cumulative distribution functions," being the integral of the "probability distribution function." These are also taken from Knuth and Trabb-Pardo. Let me make a jpeg.
NOTE: The table on page 163 of $\rho_1(\alpha)$ agrees exactly with the table of $\rho(u)$ in Erick's link on the Dickman-de Bruijn function. So, I think you have a winner.
• I should have looked in Riesel. I had tried Pomerance but with not much luck. Thanks.
– lhf
Apr 29, 2013 at 0:59
MathWorld states that the probability that $P_1(n) \gt \sqrt n$ is $\log 2$. The first few rough numbers are given in OEIS A064052, but there are no references.
• Ah, I missed that when I searched. Thanks.
– lhf
Apr 29, 2013 at 0:52
Posting my first Math-StackExchange answer...
According to this section of Wikipedia, by Dixon's theorem, the probability that the largest prime factor of $$n$$ is less than $$n^{1/m}$$ is approximately $$m^{-m}$$ for any real $$m \ge 1$$.

So the probability that the largest prime factor is less than $$\sqrt n = n^{1/2}$$ is approximately $$2^{-2} = 1/4 = 0.25$$, and less than $$\sqrt[3]n = n^{1/3}$$ approximately $$3^{-3} = 1/27 \approx 0.037$$.
I don't know the details of this theorem; I just found this quotation on Wikipedia and thought it might be useful for you. I also don't know how accurate this formula is.
I tried to check this formula experimentally and wrote Python code for that (using Pollard rho and Fermat algorithms). I don't know if the rules allow posting code here on Math StackExchange, so I am providing just links:
You can see (run) my code in action here (and here is a copy of my code just in case if first link is broken, second link is not runnable).
Results for 10K checked 64-bit numbers here:
Checked nums: 10242
Expected: 1.0: 1.00000, 1.5: 0.54433, 2.0: 0.25000, 2.5: 0.10119, 3.0: 0.03704, 3.5: 0.01247, 4.0: 0.00391, 4.5: 0.00115, 5.0: 0.00032, 5.5: 0.00008
Actual: 1.0: 1.00000, 1.5: 0.51541, 2.0: 0.23203, 2.5: 0.09008, 3.0: 0.03202, 3.5: 0.01021, 4.0: 0.00321, 4.5: 0.00105, 5.0: 0.00038, 5.5: 0.00005
Here are pairs of m (from the formula above) and probability. So Expected (by the formula above) is close to Actual (experimental, from factoring 64-bit numbers), especially for larger m. Maybe the formula is more precise for numbers larger than the 64-bit ones I checked, or for a larger sample of tested numbers.
https://mcrl2.org/web/user_manual/tools/release/pbessolvesymbolic.html | # pbessolvesymbolic
This is a solving tool for parameterised Boolean equation systems (.pbes extension) that is based on a parity game exploration technique that utilises symbolic representations. The symbolic exploration works very similar to the one implemented by lpsreach, where it is described in great detail. The main difference is that the underlying PBES is first transformed into a standard recursive format (SRF), which is in some sense very similar to a linear process.
Next, we describe useful options that are exclusive to pbessolvesymbolic. One option that can further refine the dependencies of transition groups is --split-conditions, which introduces new transition groups based on the structure of the SRF PBES. Generally, option 1 is safe, but options 2 and 3 can yield an infinite PBES because the conditions become weaker.
Similarly to pbessolve, the pbessolvesymbolic tool also contains various partial solving strategies that attempt to optimistically solve the intermediate parity games in order to present the solution (and terminate) early. These can be enabled with the --solve-strategy option.
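For example, a typical invocation might combine several of the options documented below (an illustrative sketch; `model.pbes` is a hypothetical input file):

```shell
# Solve model.pbes with 4 Lace workers, an 8 GB Sylvan memory limit,
# transition-group caching, and on-the-fly detection of solitair winning cycles.
pbessolvesymbolic --lace-workers=4 --memory-limit=8 --cached --solve-strategy=1 model.pbes
```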
## Limitations
Currently, pbessolvesymbolic cannot provide counter-examples when the property does not hold, and solving PBESs with counter-example information is extremely slow.
## Manual page for pbessolvesymbolic
### Usage
pbessolvesymbolic [OPTION]... [INFILE [OUTFILE]]
### Description
Solves PBES from INFILE. If INFILE is not present, stdin is used. The PBES is first instantiated into a parity game, which is then solved using Zielonka’s algorithm.
### Command line options
--cached
use transition group caching to speed up state space exploration
--chaining
reduce the amount of breadth-first iterations by applying the transition groups consecutively
--groups[=GROUPS]
‘none’ (default) no summand groups
‘used’ summands with the same variables are joined
‘simple’ summands with the same read/write variables are joined
a user defined list of summand groups separated by semicolons, e.g. ‘0; 1 3 4; 2 5’
--info
print read/write information of the summands
--lace-dqsize[=NUM]
set length of Lace task queue (default 1024*1024*4)
--lace-stacksize[=NUM]
set size of program stack in kilobytes (0=default stack size)
--lace-workers[=NUM]
set number of Lace workers (threads for parallelization), (0=autodetect, default 1)
--max-iterations[=NUM]
limit number of breadth-first iterations to NUM
-m[NUM] , --memory-limit[=NUM]
Sylvan memory limit in gigabytes (default 3)
--print-nodesize
print the number of LDD nodes in addition to the number of elements represented as ‘elements[nodes]’
-QNUM , --qlimit=NUM
limit enumeration of quantifiers to NUM iterations. (Default NUM=1000, NUM=0 for unlimited).
--reorder[=ORDER]
‘none’ (default) no variable reordering
‘random’ variables are put in a random order
a user defined permutation, e.g. ‘1 3 2 0 4’
--reset
set constant values when introducing parameters
-rNAME , --rewriter=NAME
use rewrite strategy NAME:
jitty
jitty rewriting
jittyc
compiled jitty rewriting
jittyp
jitty rewriting with prover
--saturation
reduce the amount of breadth-first iterations by applying the transition groups until fixed point
-sNUM , --solve-strategy=NUM
Use solve strategy NUM. All strategies except 0 periodically apply on-the-fly solving, which may lead to early termination.
0
No on-the-fly solving is applied
1
Detect solitair winning cycles.
2
Detect solitair winning cycles with safe attractors.
3
Detect forced winning cycles.
4
Detect forced winning cycles with safe attractors.
5
Detect fatal attractors.
6
Detect fatal attractors with safe attractors.
7
Solve subgames using a Zielonka solver.
-c[NUM] , --split-conditions[=NUM]
split conditions to obtain possibly smaller transition groups
0 (default) no splitting performed.
1 only split disjunctive conditions.
2 also split conjunctive conditions into multiple equations, which often yields more reachable states.
3 alternative split for conjunctive conditions where even more states can become reachable.
--timings[=FILE]
append timing measurements to FILE. Measurements are written to standard error if no FILE is provided
-t , --total
make the SRF PBES total
#### Standard options
-q , --quiet
do not display warning messages
-v , --verbose
display short intermediate messages
-d , --debug
display detailed intermediate messages
--log-level=LEVEL
display intermediate messages up to and including level LEVEL
-h , --help
display help information
--version
display version information
--help-all
display help information, including hidden and experimental options
Wieger Wesselink
https://www.zbmath.org/?q=an%3A0609.76018 | # zbMATH — the first resource for mathematics
The uniqueness of Hill’s spherical vortex. (English) Zbl 0609.76018
The authors study the free boundary problem
$$r\left(\frac{1}{r}\psi_r\right)_r+\psi_{zz}= \begin{cases} -\lambda r^2 f_0(\psi) &\text{in } A, \\ 0 &\text{in } \Pi\setminus A, \end{cases}$$ $$\psi|_{r=0}=-k,\qquad \psi|_{\partial A}=0,$$ together with certain asymptotics at infinity.
Here $$\Pi =\{(r,z) \mid r>0,\ z\in {\mathbb{R}}\}$$, $$f_0\geq 0$$, and $$\psi$$ is a Stokes stream function in cylindrical coordinates (no dependence on $$\theta$$). The set $$A\subset \Pi$$ is bounded and open, but a priori unknown. A special case of the problem is Hill's problem, for which an explicit solution is known. It is proven that any weak solution to the problem is the explicit solution modulo a translation in $$z$$. Such solutions may be obtained as local maximizers of a functional.
Reviewer: G.Warnecke
##### MSC:
76B47 Vortex flows for incompressible inviscid fluids
35J25 Boundary value problems for second-order elliptic equations
35R35 Free boundary problems for PDEs
| 2021-03-05 23:23:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6248921751976013, "perplexity": 3206.5534423935314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178373761.80/warc/CC-MAIN-20210305214044-20210306004044-00214.warc.gz"} |
https://www.scienceforums.net/topic/68449-make-calcium-metal/#comment-711397 | Make calcium metal
Recommended Posts
I read somewhere that you can make calcium metal by reacting powdered aluminium and calcium sulphate ( from plaster ) in a typical thermite reaction. It just so happens that I want some calcium metal right now, so I thought I'd try it. I crushed up some already set plaster (calcium sulphate nonahydrate I'm pretty sure) and now I plan on heating it until it dries to the dihydrate or anhydride. Which hydrate ( or the anhydride ) do you think would work best? I would guess the anhydride, the water is just one more thing to potentially ruin the batch. I'm really hoping this will work. Anyways, I'll get to it and keep you posted. No matter how trivial it is, I'd like to have your input on this.
I crushed up some already set plaster (calcium sulphate *nonahydrate* I'm pretty sure) [...]
Learn to count water molecules first.
I just dehydrated the CaSO4 by heating with a small butane torch (or at least I hope I dehydrated it). The next step would be to mix with Al and light it up... but nature had to ruin the fun and start it raining. I'll wait a while for it to stop. In the meantime, I might do a few purity checks on the dehydrated CaSO4, grind it up a little finer.

This is completely random, but when I was heating some stainless steel mesh with the torch, I found that if you press the head of it against the mesh the flame will go through... If you try to pull it away again, the actual flame will stay on the other side of the mesh. Try it. It looks pretty cool.

Oh, sorry I can't get any pictures up for now, my camera is so stubborn.
Did I make an error? It wouldn't surprise me too much. Would you mind pointing it out to me, I don't see it? I've never had an excuse to do anything with calcium sulphate before, so I really don't know too much about it. You may notice I said "pretty sure". Again, thanks for your input.
Well, it stopped raining a while ago so I attempted the experiment. Sorry I couldn't get back until now.

Anyways, it went great! I ended up with a dull, oxidized chunk. I did the thermite reaction on a thick steel slab, which likely sucked heat from the reactants until they stopped prematurely. Over half of it burned though, so I took the resultant piece for primitive testing. It gave a metallic sound when tapped against the aforementioned steel slab. It bubbled at a respectable rate when placed in water, slowly turning the solution cloudy white. The solution reacted vigorously with ~38% HCl. The whole chunk reeked of H2S (maybe some leftover sulphides?). When scratched, it exposed a shiny metal surface. Sounds like calcium! I used a 2:1 sulphate to aluminium ratio for the thermite, standard magnesium ribbon ignition. I got photos, but I can't upload them! Ugh... So frustrating!
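For scale, the 2:1 mass ratio used above can be compared against one hypothetical overall equation, 3 CaSO4 + 10 Al → 3 Ca + 4 Al2O3 + Al2S3. This assumes full reduction to calcium metal with the sulfur ending up as Al2S3, which is an idealisation; the actual product mix in a real burn is disputed later in the thread. A quick sanity check:

```python
# Mass-ratio check for the hypothetical overall reaction
#   3 CaSO4 + 10 Al -> 3 Ca + 4 Al2O3 + Al2S3
# (an assumed idealisation, not a confirmed product distribution)
M_CaSO4 = 40.08 + 32.06 + 4 * 16.00   # 136.14 g/mol
M_Al = 26.98                          # g/mol
ratio = (3 * M_CaSO4) / (10 * M_Al)   # CaSO4 : Al by mass
print(f"{ratio:.2f} : 1")             # ~1.51 : 1
```

So a 2:1 sulphate-to-aluminium mix is aluminium-lean relative to this idealised stoichiometry, i.e. it carries excess sulphate.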
Learn to spot italicisation.
Does italicising it make a difference? Is that some new convention that I didn't learn in chemistry? I felt like italicising it, so what? Please point out the errors, if any, in a way I can make sense of. Thank you.
One last thing before we can be sure what you have is calcium: Flame test?
If it is, congratulations!
Moderator Note:
I don't think that arguments over formatting really belong in a thread about calcium metal. Back on topic, please.
OK, back to the topic.
I really think that trying to prepare calcium by a thermite type reaction is likely to be hazardous.
I therefore do not think it should be done by someone who does not pay attention to what they are doing.
So I don't think that it's a good idea for someone who not only gets utterly the wrong hydration number for calcium sulphate, but can't spot his error even when it's quoted back at him with the mistake highlighted by italicisation.
It's not an issue of formatting.
It's an issue of watching what you are doing, and thinking it through clearly.
I invite you to consider the reaction of a little trapped water with white hot Al or Ca and calculate the volume of gas (be it H2 or H2O) formed.
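The gas-volume exercise John proposes is easy to sketch with the ideal gas law. The gram of trapped water and the flame temperature below are assumed, illustrative figures, not measurements from this thread:

```python
# Ideal-gas estimate of the gas released when a little trapped water
# flashes to steam (or H2) against white-hot metal.
R = 8.314          # J/(mol*K), gas constant
m_water = 1.0      # g of trapped water (assumption)
M_water = 18.015   # g/mol
n = m_water / M_water   # mol of H2O; H2 would be produced mole-for-mole
T = 2300.0         # K, rough thermite flame temperature (assumption)
P = 101325.0       # Pa, atmospheric pressure
V_litres = 1000 * n * R * T / P
print(f"{V_litres:.1f} L")  # roughly 10.5 L of gas from ~1 mL of liquid water
```

That four-order-of-magnitude expansion from a single millilitre of water is exactly the kind of slag-throwing hazard being warned about here.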
John, I thought I italicized that myself; I did notice and consider that, however. Doesn't set plaster have 9 H2Os? That's what my chem teacher taught me. November, Nonagon, Ununnonium, they all involve(d) 9 (obvious, I know).
Sorry I took so long. Anyways, I filtered and boiled down the remaining non-chlorinated precipitate, got a nice bright orange from it! I'd like it if some other people could try this, to give it a little more credibility from their potential confirmations.
Agreed, it can be dangerous. Mmm... But John my friend, I think you failed to consider that for ignition, I stick a piece of Mg in the pile, put a mix of Al, S, & KNO3 around it, then use a model rocket remote igniter to back up 20 meters before I light any thermite mixture, plus safety goggles that I modded with a UV filter from the eye doctor's. Sounds really excessive, I know, but better excessive than molten metal splattered across your face, right? Back at Science Madness, we didn't point out things in quotes with italics. I'm a complete noob to these science forums' conventions.
John, there are nerdy GIRLS out here too.
OK, so I think that you are not paying a lot of attention to what you do and therefore shouldn't be playing with thermite.
Your response is to point out that you don't know what you wrote and didn't realise you should check.
" I thought I italicized that myself,"
To be fair, John Cuthber, people don't expect their own quotes to be tampered with in a reply. To insist that chilled_fluorine should pay more attention in this instance is like telling someone you've hidden something in the room they're in, and then chastising them when they fail to check their own pockets.
Thank you phi, for the first bit of support I have been given in my short time here.
John, "I thought I italicized that myself" was not my justification. The safety precautions I named are. Keep in mind that I was only using a puny 6 gram pile, not exactly the mother of all bombs. If I was truly as ignorant as you think I am, nothing you could say would influence my decisions, therefore you should have given up already. Are you beginning to doubt even yourself? John, if I call a beer a giddleboop, and you call it a beer, but the bartender gives us both beers, does it really matter what we call them if we just want a beer? I would say no. I admit it, I made a mistake. My chemistry teacher was wrong, and since I said what she did, that makes me equally wrong. No, I'm not trying to take the blame away from myself. I could have found out for sure. I just presumed a chem teacher was a valid source, and in this case I was wrong. Either way, I made calcium, which was the purpose of all this. I succeeded (got my metaphorical beer (no, I am not an alcoholic)), debate that.
Wow, really? With a caesium on top? Even if I wanted to, I couldn't, and the only pics I got were of the glowing slag, it wouldn't satisfy your pyromania one bit.
What about a current pic of the calcium metal? I've got some we can compare with.
Hmm... You do remember that I dissolved it all, right? Guess I could make some more. I already told you that my camera's upload link broke or something, so I "couldn't, even if I wanted to". It was just a dull gray, oxidized chunk anyways, could you really tell if it was calcium or not from such a picture? Where do you get your calcium, elementcollector?
United Nuclear! $10 or something, I ampouled some the other day. Still have plenty left for reactions, etc.
I wonder if I could reduce some rare earths with this stuff?
EDIT: Incidentally, how did you dehydrate your CaSO4? I blowtorched some of mine in a ceramic bowl, and it didn't *seem* to do anything.
Edited by elementcollector1
Personally, I just heat it up in a furnace, or if I'm short for time I blowtorch it until it's red hot. The temporary glow is what tells me it is done. United Nuclear is great, but I try to avoid ordering from there. I used to add sulphuric and distill, but that's dangerous, and I'm out of good sulphuric now.
Edited by chilled_fluorine
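A cheap way to verify the dehydration discussed above is by mass: fully converting gypsum (CaSO4·2H2O) to the anhydrous salt should shed about a fifth of the starting weight. A quick check, using standard atomic masses (nothing thread-specific is assumed here):

```python
# Theoretical mass loss on fully dehydrating gypsum, CaSO4.2H2O.
M_CaSO4 = 40.08 + 32.06 + 4 * 16.00   # 136.14 g/mol
M_H2O = 2 * 1.008 + 16.00             # 18.016 g/mol
M_dihydrate = M_CaSO4 + 2 * M_H2O     # 172.17 g/mol
loss_percent = 100 * 2 * M_H2O / M_dihydrate
print(f"{loss_percent:.1f}%")  # ~20.9% -- weigh the powder before and after heating
```

A loss nearer 15-16% would suggest the heating only reached the hemihydrate (plaster of Paris) stage rather than full dehydration.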
• 1 month later...
I don't think any Ca metal is made at all, not just because Ca is more electropositive than Al (there can be exceptions to that, like making Na from Mg), but because of different reasons.
Firstly, let's assume Ca metal was made. Aluminium sulfate would be a by-product. Usually, it is quite stable, but we're talking about thermite temperatures here. At that temperature, it will decompose to form Al2O3, SO2 and oxygen. Both the SO2 and oxygen can and will react with any calcium formed, especially at such a high temperature.
Second, remember that although sulfates are not typically regarded as oxidizers, they can act as oxidizing agents at high temps. For example, Na2SO4 is reduced to Na2S easily in a furnace when roasted with charcoal. It is far more energetically favourable for aluminium metal to reduce the sulfate to sulfide, instead of actually "stealing" off the sulfate anion. Any potential calcium metal formed will also react with the sulfate quickly to form oxides of calcium and sulfides.
So, at the end, instead of calcium metal, there is a mix of calcium and aluminium sulfides and oxides.
Disagree.
Why don't you agree? You can't just say "disagree" without explaining your reasoning. Care to explain why it would be favourable for calcium metal to form and not be destroyed instantly?
By the way, another reason why no calcium metal is formed: The exothermic reaction. The reduction of Ca2+ by Al, if it even happens at all, would be NOWHERE NEAR as exothermic as the CaSO4+Al thermite. But the oxidation of aluminium from another oxidizing agent, the sulfate ion itself, which loses oxygen at thermite temperatures even without a reducing agent, is much more likely to be the cause of the exothermic reaction.
Yet another reason. The reason why magnesium reacts with sodium hydroxide is because of covalent bonding. Magnesium partially covalently bonds to its oxygen, along with ionic bonding. Sodium ionically bonds strongly to oxygen, but hardly covalently bonds at all. So the additional covalent bonding formed is what makes this reaction energetically favourable. But think of calcium. I would think that it would also partially covalently bond to its oxygen, along with a strong ionic bonding. So Ca2+ oxidation of aluminium would not be energetically favourable (or favourable at all, unless you distil off the molten calcium metal).
The formation of H2S that you observed is perfectly consistent with my hypothesis that sulfides are formed instead of calcium metal. But you have one more test, light it with a torch. If any calcium metal is formed, it would burn with an orange flame after you stop heating it with the torch. A flame test will only show the presence of calcium, not whether it is in the form of a metal or a compound, so that would not work.
Edited by weiming1998
The only proof I need is what I saw. And what I saw was a chunk of calcium. Try it yourself, but I have better things to do than explain myself to doubters. Until you've tried it (and even then you can't be sure), you can't say anything against it. Btw, I dissolved it all in HCl, 'member?
People "see" UFOs but that doesn't make them real; did you assay it? Explaining yourself to doubters is an essential part of what science is all about... ever heard of peer review?
| 2023-02-04 19:25:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49578461050987244, "perplexity": 1872.4611780769583}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500151.93/warc/CC-MAIN-20230204173912-20230204203912-00298.warc.gz"} |
http://jan.varwig.org/archive/angularjs-views-vs-directives | In January I published a half-finished thought-piece on the old question of how to do nested views in AngularJS. That article was originally a reply to a thread on the AngularJS mailing list. In retrospect and after some feedback I want to elaborate on some points.
I argued that people should not look to views to build their app but make use of directives instead. Views are a mechanism, provided by AngularJS through the $route service, to bind a URL to a controller instance and additional parameters, as well as a template that is injected into the DOM at a point designated by the ngView directive. There is also the UI-Router module from Angular-UI that provides the ability to nest views inside each other, allowing slightly more elaborate structures than Angular core. In UI-Router the directive that goes into the DOM is named uiView and “views” are called “states”; states are a bit more powerful than Angular core views, but largely it’s the same concept.

## Original Intention

Beginners coming to AngularJS are often confused about how to structure their apps. AngularJS and the documentation provide little guidance in this regard, and someone looking for answers will quickly stumble upon ngView and get the impression that it is the way to structure your app. The original idea behind my post was to direct attention back to directives and to get people to use them for as many things as possible, because a lot of the documentation out there treats directives as last-resort options for special cases, when the opposite is true! Directives are the core building block of an Angular app. Use them to insert whatever structure or behavior you need in your app. Views are merely a shortcut for very simple use-cases.

## Capabilities of Views and Directives

> There is no difference, everything you can do with nested directives, you can do with nested views

This is the main objection I received in response to the original post. The answer to this is both yes and no, depending on your situation. Let’s first look at the case where it’s yes: this would be a fairly simple app without any fancy behavior beyond what is offered by AngularJS built-in directives.
You can split your app into a small number of modules, likely representing CRUD behavior on a REST backend, a couple of lists, forms etc. The code required to attach your data to your scope fits neatly into the controllers for each view. You can derive the entire view state of your application from the URL and the data in your models.

This architecture breaks down as soon as you try anything more sophisticated. Imagine a couple of improvements: for your lists, you want endless scrolling. To implement that you have to write a directive that generates a scrolling container, checks for the scroll position and reloads data as necessary. You might have some collapsible boxes containing filters for your list view, and you want the dates in the table to be displayed in relative terms. The scroll position and the collapsed-state of the boxes are all part of your view state, but not necessarily something you want to encode in the URL. Tiny directives that update the DOM locally (keeping the relative timestamps up-to-date, for example) are even something that can’t be done at all with views. For performing tasks like this you need directives.

What’s more important, though, is the app’s structure. Due to the nature of the DOM, your application is always a tree of components. The entirety of the data in your scopes determines which components are displayed and how they look and behave. The URL however is not a tree, it’s a linear list, and thus can only be used to store the state of the list of components from the app root to one leaf-node. You could have /appState/Astate/Bstate/Cstate or /appState/Astate/Bstate/Dstate but not meaningfully represent the states of both C and D in the URL.

(Smartass-Warning No 1: Of course you can throw in objections now that you could represent trees in a linear fashion or encode arbitrary byte strings in the URL and represent whatever you want there. But that’s far beyond what $route/UI-Router offer.)
(Smartass-Warning No 2: You could also replace state in the URL with tuples of state (exactly what UI-Router calls “Multiple named views”) whenever you have fixed tuples of components (C and D in the example), but that only holds as long as the tuples are predetermined. But if you do that you could also just treat them as a single component.)
## View-Containers are meaningless, separated from their semantics through the routes.
The other, secondary gripe that I have with UI-Router’s nested views is that they violate another core idea of AngularJS: your DOM is the main place to describe the structure of your app. Reading a template should give you an idea of what goes where. If you want to edit a user, put an <edit-user user="user"/> directive into your template:
• A reader will immediately see what that directive does and what data it depends on.
• If you write the directive correctly it will be location independent, you can place it somewhere else in your app, as long as you pass in a user through the attribute it will work.
Using views litters your templates with meaningless containers, outsourcing the actual purpose of every view into the routes/states defined elsewhere. If you nest routes, the context of every view becomes implicit, it is harder to move views around, and the only way to pass data into a view is through the scope.
An example for this scenario: suppose you have the user-editor <edit-user user="user"/>, it will be trivial to edit two users next to each other: <edit-user user="user1"/><edit-user user="user2"/> and a few lines of CSS to arrange them visually and you’re done.
## But URLs make the web what it is, you do want to bind your application state to the URL!
If you rely solely on the URL to store your application state you limit the complexity of what you can store. This is not necessarily bad! Quite the contrary, the simpler your app the better. But be aware of the limitations and implications of your architecture and make decisions like these consciously.
Also, embrace directives, they’re cool.
Directives are cool. | 2017-07-22 04:33:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25999563932418823, "perplexity": 1454.3210619577555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423901.32/warc/CC-MAIN-20170722042522-20170722062522-00515.warc.gz"} |
https://www.physicsforums.com/threads/quick-electric-potential-question.111557/ | # Quick electric potential question
1. Feb 20, 2006
### hoseA
http://img69.imageshack.us/img69/6329/potential4iu.png [Broken]
I thought the electric potential would be larger at Va than at V0.
Va>V0
Apparently I'm wrong. I thought since R=0 at the origin the electric potential would also be zero. Is that not the case?
Can the electric potential even be determined?
Or am I mixing it up with electric potential energy?
Last edited by a moderator: May 2, 2017
2. Feb 20, 2006
### phucnv87
In the picture you attached, I think that the electric field is uniform
3. Feb 20, 2006
### hoseA
How do you arrive at this conclusion?
4. Feb 20, 2006
### phucnv87
Because the lines of electric force are parallel
5. Feb 20, 2006
### hoseA
=( Apparently that's incorrect. It's either V0>VA or "cannot be determined".
I really need to get the answer right... since it's a multiple-choice question. My current score is -6.67 (negative)... if it's right I'll get -3.33, or -10 if it's wrong.
6. Feb 20, 2006
### phucnv87
As the picture shows, we have $$V_0>V_A$$, and because we don't know $$V_A$$ we cannot calculate $$V_0$$ numerically. If we knew $$V_A$$ we could calculate $$V_0$$ by this method: $$V_0=V_A+Ex$$, where $$x$$ is the position of point A on the x-axis.
Last edited: Feb 20, 2006
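The relation $$V_0=V_A+Ex$$ is easy to check with a quick calculation. The numbers below are made up purely for illustration (the original problem gives no values):

```python
# Made-up values for a uniform field E pointing along +x.
E = 100.0   # field magnitude in V/m (assumed)
x = 0.05    # x-coordinate of point A in m (assumed)
V_A = 2.0   # potential at point A in volts (assumed)

# Moving against the field direction raises the potential,
# so the origin is at a higher potential than A.
V_0 = V_A + E * x
print(V_0)        # 7.0
print(V_0 > V_A)  # True
```

Whatever positive values of E and x are chosen, $$V_0>V_A$$ holds, which is why the answer is $$V_0>V_A$$ rather than "cannot be determined".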
7. Feb 20, 2006
### hoseA
Thanks. That helps. | 2017-07-21 16:46:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7635782957077026, "perplexity": 2290.956192220983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423787.24/warc/CC-MAIN-20170721162430-20170721182430-00128.warc.gz"} |
https://plainmath.net/other/100004-find-two-sets-a-and-b-such-tha | cherzywerzyw7b
2022-12-20
Find two sets A and B such that A is an element of B and $A\subseteq B$. I want to know whether A={1,2,3} and B={1,2,3} can work.
bleustggv
Expert
No, they can't be equal, because (under the normal axioms for sets) a set cannot belong to itself.
kapitlio0z
Expert
$A=\left\{1,2,3\right\}\in \left\{1,2,3,\left\{1,2,3\right\}\right\}$
$A=\left\{1,2,3\right\}\subseteq \left\{1,2,3,\left\{1,2,3\right\}\right\}$
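The construction in the second answer can be checked directly in Python; the inner copy of {1,2,3} has to be a frozenset, since ordinary (mutable) sets are unhashable and cannot be elements of another set:

```python
# A = {1, 2, 3} as a frozenset so it can sit inside another set.
A = frozenset({1, 2, 3})

# B = {1, 2, 3, {1, 2, 3}} contains A both as an element
# and element-wise (1, 2 and 3 are all members of B).
B = {1, 2, 3, A}

print(A in B)   # True: A is an element of B
print(A <= B)   # True: A is a subset of B
print(A == B)   # False: as the first answer notes, A and B cannot be equal
```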
Recalculate according to your conditions! | 2023-02-05 08:13:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 31, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6004359126091003, "perplexity": 640.9829374811097}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500250.51/warc/CC-MAIN-20230205063441-20230205093441-00536.warc.gz"} |
http://hawaiilibrary.net/AlsoRead.aspx?BookId=3988644 |
### Nonlinear Instability and Sensitivity of a Theoretical Grassland E...
##### By: B. Wang; M. Mu
Description: LASG, Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing 100029, China. Within a theoretical model context, the sensitivity and instability of the grassland ecosystem to finite-amplitude perturbations are studied. A new approach of conditional nonlinear optimal perturbations (CNOPs) is adopted to investigate this nonlinear problem. It is shown that the linearly stable grassland (desert) states can be nonlinearly unstable with finite...
Birgin, E. G., Martínez, J. M., and Raydan, M.: Nonmonotone spectral projected gradient methods on convex sets, SIAM Journal on Optimization, 10, 1196–1211, 2000.; Birgin, E. G., Martínez, J. M., and Raydan, M.: Algorithm 813: SPG–software for convex–constrained optimization, ACM Transactions on Mathematical Software, 27, 340–349, 2001.; Birgin, E. G., Martínez, J. M., and Raydan, M.: Inexact Spectral Projected Gradient Methods on Convex Sets, IMA Journal of Nume...
### Investigation of Correlation of the Variations in Land Subsidence ...
##### By: H. R. Nankali; F. Tavakoli; A. Mirzaei; K. Moghtased-azar
Description: Khosro Moghtased-Azar, Surveying Department, Faculty of Civil Engineering, Tabriz University, 51666-14766, Tabriz, Iran. Lake Urmia, a salt lake in the north-west of Iran, plays a valuable role in the environment, wildlife and economy of Iran and the region, but now faces great challenges for survival. The Lake is in immediate and great danger and is rapidly going to become barren desert. As a result, the increasing demands upon groundwater resources due...
Barbero, R. and Moron, V.: Seasonal to decadal modulation of the impact of El Niño-Southern Oscillation on New Caledonia (SW Pacific) rainfall (1950-2010), J. Geophys. Res., 116, D23111, doi:10.1029/2011JD016577, 2011.; Cazelles, B., Chavez, M., Magny, G. C., Guégan, J., and Hales, S.: Time-dependent spectral analysis of epidemiological time-series with wavelets, J. Roy. Soc. Int., 4, 625–636,
### Edge Effect Causes Apparent Fractal Correlation Dimension of Unifo...
##### By: J.-d. Creutin; D. Sempere Torres; J. M. Porrà; R. Uijlenhoet
Description: Chair of Hydrology and Quantitative Water Management, Department of Environmental Sciences, Wageningen University, Wageningen, The Netherlands. Lovejoy and Schertzer (1990a) presented a statistical analysis of blotting paper observations of the (two-dimensional) spatial distribution of raindrop stains. They found empirical evidence for the fractal scaling behavior of raindrops in space, with potentially far-reaching implications for rainfall microphysics...
Bacchi, B., Ranzi, R., and Borga, M.: Statistical characterization of spatial patterns of rainfall cells in extratropical cyclones, J. Geophys. Res., (D101), 26277–26286, 1996.; Cornford, S G.: Sampling errors in measurements of raindrop and cloud droplet concentrations, Meteorol. Mag., 96, 271–282, 1967.; Cox, D R. and Isham, V.: Point processes, Chapman & Hall, London, 188 pp., 1980.; Cressie, N A C.: Statistics for spatial data, John Wiley & Sons, New York, 936 pp., 1...
### Fractal Analysis for the Ulf Data During the 1993 Guam Earthquake ...
##### By: Y. Ida; M. Hayakawa
Description: Department of Electronic Engineering, The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu, Tokyo, 182-8585, Japan. An extremely large earthquake (with magnitude of 8.2) happened on 8 August 1993 near the Guam island, and ultra-low-frequency (ULF) (frequency less than 1 Hz) electromagnetic fields were measured by 3-axis induction magnetometers at an observing station (with the epicentral distance of 65 km) with sampling frequency of 1 Hz. In...
Burlaga, L. F. and Klein, L. W.: Fractal structure of the interplanetary magnetic field, J. Geophys. Res., 91, 347–350, 1986.; Hayakawa, M. and Fujinawa, Y. (Eds.): Electromagnetic Phenomena Related to Earthquake Prediction, Terra Sci. Pub. Co., Tokyo, 667p, 1994.; Gotoh, K., Hayakawa, M., and Smirnova, N.: Fractal analysis of the geomagnetic data obtained at Izu peninsula, Japan in relation to the nearby earthquake swarm of June–August 2000, Nat. Hazards Earth Syst. Sci...
### On the Earth’s Magnetic Field and the Hall Effect : Volume 10, Iss...
##### By: J. E. Allen
Description: University College, Oxford, OX1 4BH, UK. In a recent paper de Paor put forward a new theory of the Earth's magnetic field that depended on the Hall effect as an energy transfer mechanism. The purpose of this paper is to demonstrate that the mechanism invoked is unimportant except in certain gaseous plasmas.
### A Simple Metric to Quantify Seismicity Clustering : Volume 17, Iss...
##### By: J. A. Vallejos; S. D. McKinnon; K. F. Tiampo; N. F. Cho; R. Dominguez; W. Klein
Description: Department of Earth Sciences, University of Western Ontario, London, Canada. The Thirulamai-Mountain (TM) metric was first developed to study ergodicity in fluids and glasses (Thirumalai and Mountain, 1993) using the concept of effective ergodicity, where a large but finite time interval is considered. Tiampo et al. (2007) employed the TM metric to earthquake systems to search for effective ergodic periods, which are considered to be metastable equilibr...
Baiesi, M. and Paczuski, M.: Scale-free networks of earthquakes and aftershocks, Phys. Rev. E, 69, 066106, doi:10.1103/PhysRevE.69.066106, 2004.; de Oliveira, C. R. and Werlang, T.: Ergodic hypothesis in classical statistical mechanics, Rev. Bras. Ensino. Fis., 29(2), 189–201, 2007.; Dieterich, J.: A constitutive law for rate of earthquake production and its application to earthquake clustering, J. Geophys. Res., 99(B2), 2601–2618, 1994.; Farquhar, I. E.: Ergodic theory ...
### Evaluation of Eta Model Seasonal Precipitation Forecasts Over Sout...
##### By: J. L. Gomes; J. F. Bustamante; S. C. Chou
Description: Center for Weather Prediction and Climate Studies – CPTEC, National Institute for Space Research, INPE, Brazil. Seasonal forecasts run by the Eta Model over South America were evaluated with respect to precipitation predictability at different time scales, seasonal, monthly and weekly for one-year period runs. The model domain was configured over most of South America in 40km horizontal resolution and 38 layers. The lateral boundary conditions were take...
### Intermittent Particle Dynamics in Marine Coastal Waters : Volume 2...
##### By: H. Loisel; F. G. Schmitt; P. R. Renosh
Description: University of Lille, UMR 8187, Laboratory of Oceanology and Geosciences, 28 Avenue Foch, 62930 Wimereux, France. Marine coastal processes are highly variable over different space and time scales. In this paper we analyse the intermittency properties of particle size distribution (PSD) recorded every second using a LISST instrument (Laser In-Situ Scattering and Transmissometry). The particle concentrations have been recorded over 32 size classes from 2.5 to...
Agrawal, Y. and Pottsmith, H.: Instruments for particle size and settling velocity observations in sediment transport, Mar. Geol., 168, 89–114, 2000.; Amal, R., Raper, J. A., and Waite, T. D.: Fractal structure of hematite aggregates, J. Colloid Interf. Sci., 140, 158–168, 1990.; Bec, J.: Multifractal concentrations of inertial particles in smooth random flows, J. Fluid Mech., 528, 255–277, 2005.; Boss, E., Pegau, W. S., Gardner, W. D., Zaneveld, J. R. V., Barnard, A. H....
### Obliquely Propagating Electron Acoustic Solitons in Magnetized Pla...
##### By: H. R. Pakzad; K. Javidan
Description: Department of Physics, Bojnourd Branch, Islamic Azad University, Bojnourd, Iran. The problem of small amplitude electron-acoustic solitary waves (EASWs) is discussed using the reductive perturbation theory in magnetized plasmas consisting of cold electrons, hot electrons obeying nonextensive distribution and stationary ions. The presented investigation shows that the presence of nonextensive distributed hot electrons (due to the effects of long-range in...
Mamun, A. A., Shukla, P. K., and Stenflo, L.: Obliquely propagating electron-acoustic solitary waves, Phys. Plasmas, 9, 4 pp., doi:10.1063/1.1462635, 2002.; Amour, R. and Tribeche, M.: Variable charge dust acoustic solitary waves in a dusty plasma with a q-nonextensive electron velocity distribution, Phys. Plasmas, 7 pp., doi:10.1063/1.3428538, 2010.; Anowar, M. G. M. and Mamun, A. A.: Multidimensional instability of electron-acoustic solitary waves in a magnetized plasm...
### Thin Layer Shearing of a Highly Plastic Clay : Volume 13, Issue 6 ...
##### By: G. Gudehus; M. Külzer; A. B. Libreros Bertini; K. Balthasar
Description: Institut für Bodenmechanik und Felsmechanik, Universität Karlsruhe, Germany. Shearing tests with a thin layer of clay between filter slabs render possible large and cyclic deformations with drainage. In the pressure range of 100 kPa they serve to validate visco-hypoplastic constitutive relations. This theory is also confirmed by tests with up to 14 MPa and super-imposed anti-plane cycles. After this kind of seismic disturbance the clay stabilizes if the...
Bauer, E.: Calibration of a comprehensive hypoplastic model for granular materials, Soils and Foundations (Jap. Soc. of Soil Mech. and Foundation Eng.), 36(1), 13–26, 1996.; Bernaix, J.: New laboratory methods of studying the mechanical properties of Rocks, Int. J. Rock Mechanics and Mining Science, 6, 43–90, 1969.; Bertini, A. B L.: Hypo- und viskohypoplastische Modellierung von Kriech- und Rutschbewegungen, besonders infolge Starkbeben, Veröffentlichungen des Institute...
### Asymmetric Multifractal Model for Solar Wind Intermittent Turbulen...
##### By: W. M. MacEk; A. Szczepaniak
Description: Space Research Centre, Polish Academy of Science, Bartycka 18A, 00-716 Warsaw, Poland. We consider nonuniform energy transfer rate for solar wind turbulence depending on the solar cycle activity. To achieve this purpose we determine the generalized dimensions and singularity spectra for the experimental data of the solar wind measured in situ by Advanced Composition Explorer spacecraft during solar maximum (2001) and minimum (2006) at 1 AU. By determin...
Marsch, E. and Tu, C.-Y.: Intermittency, non-Gaussian statistics and fractal scaling of MHD fluctuations in the solar wind, Nonlin. Processes Geophys., 4, 101–124, 1997.; Marsch, E., Tu, C.-Y., and Rosenbauer, H.: Multifractal scaling of the kinetic energy flux in solar wind turbulence, Ann. Geophys., 14, 259–269, 1996.; Meneveau, C. and Sreenivasan, K. R.: Simple multifractal cascade model for fully developed turbulence, Phys. Rev. Lett., 59, 1424–1427, 1987.; Meneveau,...
### Observing Extreme Events in Incomplete State Spaces with Applicati...
##### By: K. P. Georgakakos; A. A. Tsonis
Description: Department of Mathematical Sciences, Atmospheric Sciences Group, University of Wisconsin-Milwaukee, Milwaukee, WI 53201-0413, USA. Reconstructing the dynamics of nonlinear systems from observations requires the complete knowledge of its state space. In most cases, this is either impossible or at best very difficult. Here, by using a toy model, we investigate the possibility of deriving useful insights about the variability of the system from only a part ...
### Toward Enhanced Understanding and Projections of Climate Extremes ...
##### By: A. Banerjee; D. Kumar; S. Chatterjee; A. Choudhary; E. A. Kodra; S. Chatterjee; W. Hendrix; S. Boriah; D. Das; R. Oglesby; K. Hayhoe; S. Ghosh; J. Kawale; D. Wuebbles; K. Steinhaeuser; D. Wang; K. Salvi; Q. Fu; R. Mawalagedara; S. Liess; A. R. Ganguly; J. Faghmous; P. Ganguli; C. Hays; V. Mithal; P. K. Snyder; V. Kumar
Description: Sustainability and Data Sciences Laboratory, Department of Civil and Environmental Engineering, Northeastern University, Boston, MA, USA. Extreme events such as heat waves, cold spells, floods, droughts, tropical cyclones, and tornadoes have potentially devastating impacts on natural and engineered systems, and human communities, worldwide. Stakeholder decisions about critical infrastructures, natural resources, emergency preparedness and humanitarian ai...
Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., Lee, G., Patterson, D., Rabkin, A., Stoica, I., and Zaharia, M.: A view of cloud computing, Commun. ACM, 53, 50–58, doi:10.1145/1721654.1721672, 2010.; Benjamini, Y. and Hochberg, Y.: Controlling the false discovery rate: a practical and powerful approach to multiple testing, J. R. Stat. Soc. Ser. B, 57, 289–300, 1995.; Berriman, G. B., Juve,...
### Multivariate Autoregressive Modelling of Sea Level Time Series fro...
##### By: M. J. Fernandes; M. E. Silva; S. M. Barbosa
Description: Department of Applied Mathematics, Faculty of Science, University of Porto, Portugal. This work addresses the autoregressive modelling of sea level time series from TOPEX/Poseidon satellite altimetry mission. Datasets from remote sensing applications are typically very large and correlated both in time and space. Multivariate analysis methods are useful tools to summarise and extract information from such large space-time datasets. Multivariate autoregre...
AVISO: AVISO User Handbook for Merged TOPEX/POSEIDON Products, AVI-NT-02-101-CN ed. 3.0, 1996.; Barbosa, S M., Fernandes, M J., and Silva, M E.: Space-time analysis of sea level in the North Atlantic from TOPEX/Poseidon satellite altimetry, International Association of Geodesy Symposia, 129, 248-253, Springer, 2005.; Berwin, R.: Topex/Poseidon Sea Surface Height Anomaly Product. User's Reference Manual, NASA JPL Physical Oceanography DAAC, Pasadena, CA., 2003.; Chambers,...
### Lagrangian Velocity Statistics of Directed Launch Strategies in a ...
##### By: A. C. Poje; M. Toner
Description: College of Marine Studies, University of Delaware, Newark, Delaware, USA. The spatial dependence of Lagrangian displacement and velocity statistics is studied in the context of a data assimilating numerical model of the Gulf Mexico. In the active eddy region of the Western Gulf, a combination of Eulerian and Lagrangian measures are used to locate strongly hyperbolic regions of the flow. The statistics of the velocity field sampled by sets of drifters la...
### Forecasting Characteristic Earthquakes in a Minimalist Model : Vol...
##### By: A. F. Pacheco; J. B. Gómez; M. Vázquez-prada; Á. González
Description: Departamento de Física Teórica and BIFI, Universidad de Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza, Spain. Using error diagrams, we quantify the forecasting of characteristic-earthquake occurrence in a recently introduced minimalist model. Initially we connect the earthquake alarm at a fixed time after the occurrence of a characteristic event. The evaluation of this strategy leads to a one-dimensional numerical exploration of the loss function. This first str...
### On the Predictability of Ice Avalanches : Volume 12, Issue 6 (30/0...
##### By: C. Birrer; W. A. Stahel; M. Funk; A. Pralong
Description: Laboratory of Hydraulics, Hydrology and Glaciology, Swiss Federal Institute of Technology, 8092 Zürich, Switzerland. The velocity of unstable large ice masses from hanging glaciers increases as a power-law function of time prior to failure. This characteristic acceleration presents a finite-time singularity at the theoretical time of failure and can be used to forecast the time of glacier collapse. However, the non-linearity of the power-law function mak...
### Evolution of Magnetic Helicity Under Kinetic Magnetic Reconnection...
##### By: J. Büchner; T. Wiegelmann
Description: School of Mathematics and Statistics, University of St. Andrews, St. Andrews, KY16 9SS, United Kingdom. We investigate the evolution of magnetic helicity under kinetic magnetic reconnection in thin current sheets. We use Harris sheet equilibria and superimpose an external magnetic guide field. Consequently, the classical 2D magnetic neutral line becomes a field line here, causing a B ≠ 0 reconnection. While without a guide field, the ...
### Preface Nonlinear and Scaling Processes in Hydrology and Soil Scie...
##### By: Q. Cheng; J. L. M. P. De Lima; W. F. Krajewski; A. M. Tarquis; H. Gaonac'H
Biswas, A., Si, B. C., and Walley, F. L.: Spatial relationship between $\delta^{15}$N and elevation in agricultural landscapes, Nonlin. Processes Geophys., 15, 397–407, doi:10.5194/npg-15-397-2008, 2008.; Cheng, Q., Li, L., and Wang, L.: Characterization of peak flow events with local singularity method, Nonlin. Processes Geophys., 16, 503–513, doi:10.5194/npg-16-503-2009, 2009.; de Lima, M. ...
### New Types of Stable Nonlinear Whistler Waveguides : Volume 9, Issu...
##### By: Yu. A. Zaliznyak; T. A. Davydova; A. I. Yakimenko
Description: Plasma Theory Department, Institute for Nuclear Research, Nauki Ave. 47, Kiev 03680, Ukraine. The stationary self-focusing of whistler waves with frequencies near half of the electron-cyclotron frequency in the ionospheric plasma is considered in the framework of a two-dimensional generalized nonlinear Schrödinger equation including fourth-order dispersion effects and nonlinearity saturation. New types of soliton-like (with zero topologic...
1|2|3 Records: 1 - 20 of 45 - Pages: | 2021-04-20 01:05:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4073604345321655, "perplexity": 10938.496686951556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038921860.72/warc/CC-MAIN-20210419235235-20210420025235-00132.warc.gz"} |
http://www.gamedev.net/index.php?app=forums&module=extras&section=postHistory&pid=5093064 |
### #ActualSmjert
Posted 10 September 2013 - 12:51 PM
Yes, transform.forward is normalized.
I retried writing it with the dot product Vector3.Dot(currentVelocityDir, transform.forward); and using that value to divide the X newtons and... it worked!
Or at least this is what it looks like, even if (and I think this is due to rounding errors etc.) I have a 0.001 increase of speed every second.
The dot product between the velocity and the total force is not always 0 (for instance 7.723626E-08), but it is still a very small value, and that's why there's that increase in speed, I suppose.
And to keep velocity really constant I have to set it manually if it's greater than the expected one.
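The projection being described can be sketched in plain Python (an illustration of the dot-product idea with made-up numbers, not the original Unity/C# code): removing the component of a force parallel to the velocity leaves a force that does no work, so it can turn the velocity but not change the speed.

```python
import math

# Plain 3-vector helpers (illustrative; the thread uses Unity's Vector3).
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scale(v, s):
    return tuple(x * s for x in v)

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

# Made-up numbers for illustration.
velocity = (3.0, 0.0, 4.0)                # speed 5 m/s
speed = math.sqrt(dot(velocity, velocity))
vel_dir = scale(velocity, 1.0 / speed)    # unit vector along the velocity

force = (10.0, 2.0, -1.0)                 # some applied force in newtons

# Project the force onto the velocity direction, then remove that component.
parallel = scale(vel_dir, dot(force, vel_dir))
perpendicular = sub(force, parallel)

# The remaining force is (numerically almost) orthogonal to the velocity,
# so it steers the body without doing work, i.e. without changing its speed.
print(abs(dot(perpendicular, vel_dir)) < 1e-9)  # True
```

The leftover 7.723626E-08 reported in the thread is the same story at single precision: the "perpendicular" component is only perpendicular up to floating-point rounding, which is why the speed still drifts by a tiny amount each second.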
PARTNERS | 2014-12-20 11:59:00 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8282527327537537, "perplexity": 980.6883741185636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769709.84/warc/CC-MAIN-20141217075249-00139-ip-10-231-17-201.ec2.internal.warc.gz"} |
http://ibmathsresources.com/ | IB Maths and GCSE Maths Resources from British International School Phuket. Theory of Knowledge (ToK). Maths explorations and investigations. Real life maths. Maths careers. Maths videos. Maths puzzles and Maths lesson resources.
British International School Phuket
Welcome to the British International School Phuket’s maths website. I am currently working at BISP and so I am running my site as the school’s maths resources website for both our students and students around the world.
We are a British international school located on the tropical island of Phuket in Southern Thailand. We offer a number of scholarships each year, catering for a number of national and international standard sports stars as well as for academic excellence. You can find out more about our school here.
BISP has a very proud tradition in mathematical excellence. Our students have achieved the top in Thailand awards for the Cambridge Maths IGCSEs 3 years in a row. Pictured above are our world-class maths students for this year, Minjin Kang and Natchongrat (Oy) Terdkiatkhachorn.
Please explore the site – there is a huge amount of content! Some of the most popular includes:
A large “Flipping the classroom” videos section for IB students. This covers the entire IB HL, SL and Studies syllabus.
A new School Code Challenge activity which allows students to practice their code breaking skills – each code hides the password needed to access the next level.
Over 200 ideas to help with students’ Maths Explorations – many with links to additional information to research.
Enjoy!
Are you Psychic?
There have been people claiming to have paranormal powers for thousands of years. However, scientifically we can say that as yet we still have no convincing proof that any paranormal abilities exist. We can show this using some mathematical tests – such as the binomial or normal distribution.
ESP Test
You can test your ESP powers on this site (our probabilities will be a little different from theirs). You have the chance to try to predict which card the computer has chosen. After repeating this trial 25 times you can find out if you possess psychic powers. As we are working with discrete data and have a fixed probability of guessing (0.2), we can use a binomial distribution. Say I got 6 correct; do I have psychic powers?
We have the Binomial model B(25, 0.2), 25 trials and 0.2 probability of success. So we want to find the probability that I could achieve 6 or more by luck.
The probability of getting exactly 6 right is 0.16. Working out the probability of getting 6 or more correct would take a bit longer by hand (though it could be simplified by doing 1 – P(X ≤ 5)). Doing this, or using a calculator, we find the probability is 0.38. Therefore we would expect someone to get 6 or more correct just by guessing 38% of the time.
So, using this model, when would we have evidence for potential ESP ability? Well, a minimum bar for our percentages would probably be 5%. So how many do you need to get correct before there is less than a 5% of that happening by chance?
Using our calculator we can do trial and error to see that the probability of getting 9 or more correct by guessing is only 4.7%. So, someone getting 9 correct might be showing some signs of ESP. If we asked for a higher % threshold (such as 1%) we would want to see someone get 11 correct.
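The probabilities quoted above (and the 1% threshold of 11 correct) can be reproduced with the Python standard library alone, building the binomial probabilities from math.comb rather than a calculator:

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ B(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, p = 25, 0.2  # 25 guesses, each with a 1-in-5 chance of success

print(round(comb(n, 6) * p**6 * (1 - p)**(n - 6), 2))  # 0.16  = P(X = 6)
print(round(binom_tail(n, p, 6), 2))                   # 0.38  = P(X >= 6)
print(round(binom_tail(n, p, 9), 3))                   # 0.047 = P(X >= 9)
print(binom_tail(n, p, 11) < 0.01)                     # True: 11 clears the 1% bar
```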
Now, in the video above, one of the Numberphile mathematicians manages to toss 10 heads in a row. Again, we can ask ourselves if this is evidence of some extraordinary ability. We can calculate this probability as 0.5^10 ≈ 0.001. This means that such an event would only happen 0.1% of the time. But, we’re only seeing a very small part of the total video. Here’s the full version:
Suddenly the feat looks less mathematically impressive (though still an impressive endurance feat!)
You can also test your psychic abilities with this video here.
Medical Data Mining
It’s worth watching the video above, where Derren Brown manages to flip 10 heads in a row. With Derren being a professional magician, you might expect some magic or sleight of hand – but no, it’s all filmed with a continuous camera, and no tricks. So, how does he achieve something which should only occur with probability (0.5)^10 ≈ 0.001, or 1 time in every thousand? Understanding this trick is essential to understanding the dangers of accepting data presented to you without being aware of how it was generated.
At 7 minutes in Derren reveals the trick – it’s very easy, but also a very persuasive way to convince people something unusual is happening. The trick is that Derren has spent the best part of an entire day tossing coins – and only showed the sequence in which he achieved 10 heads in a row. Suddenly with this new information the result looks much less remarkable.
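The trick is straightforward to simulate (an illustrative sketch, not code from the article): record batches of 10 tosses and only "publish" the first batch that comes up all heads. Each batch succeeds with probability 0.5^10 = 1/1024, so on average the filming takes about a thousand batches.

```python
import random

def batches_until_ten_heads(rng):
    """Count 10-toss batches until one batch is all heads."""
    batches = 0
    while True:
        batches += 1
        if all(rng.random() < 0.5 for _ in range(10)):
            return batches

rng = random.Random(2013)  # fixed seed so the run is reproducible
runs = [batches_until_ten_heads(rng) for _ in range(200)]
average = sum(runs) / len(runs)

# The per-batch success probability, hence an expected wait of 1024 batches.
print(0.5 ** 10)  # 0.0009765625 = 1/1024
```

Averaged over many simulated "filming sessions" the wait comes out close to the expected 1024 batches, yet only the single successful batch would ever make it on screen – exactly the selective-reporting problem described below for drug trials.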
Scientific tests are normally performed to a 5% confidence interval – that is, if there is a less than 5% chance of something happening by chance then we regard the data as evidence to reject the null hypothesis and to accept the alternate hypothesis. In the case of the coin toss, we would if we didn’t know better, reject the null hypothesis that this is a fair coin and conjecture that Derren is somehow affecting the results.
Selectively presenting results from trials is called data mining - and it’s a very powerful way to manipulate data. Unfortunately it is also a widespread technique in the pharmaceutical industry when companies release data on new drugs. Trials which show a positive effect are published; those which show no effect (or negative effects) are not. This is a massive problem - and one which has huge implications for people’s health. After all, we are prescribed drugs based on scientific trials which attest to their efficacy. If this data is being mined to skew results in the drug company’s favour then we may end up taking drugs that don’t work - or even make us worse.
Dr Ben Goldacre has written extensively on this topic – and an extract from his article “The Drugs Don’t Work” is well worth a read:
The Drugs Don’t Work
Reboxetine is a drug I have prescribed. Other drugs had done nothing for my patient, so we wanted to try something new. I’d read the trial data before I wrote the prescription, and found only well-designed, fair tests, with overwhelmingly positive results. Reboxetine was better than a placebo, and as good as any other antidepressant in head-to-head comparisons. It’s approved for use by the Medicines and Healthcare products Regulatory Agency (the MHRA), which governs all drugs in the UK. Millions of doses are prescribed every year, around the world. Reboxetine was clearly a safe and effective treatment. The patient and I discussed the evidence briefly, and agreed it was the right treatment to try next. I signed a prescription.
But we had both been misled. In October 2010, a group of researchers was finally able to bring together all the data that had ever been collected on reboxetine, both from trials that were published and from those that had never appeared in academic papers. When all this trial data was put together, it produced a shocking picture. Seven trials had been conducted comparing reboxetine against a placebo. Only one, conducted in 254 patients, had a neat, positive result, and that one was published in an academic journal, for doctors and researchers to read. But six more trials were conducted, in almost 10 times as many patients. All of them showed that reboxetine was no better than a dummy sugar pill. None of these trials was published. I had no idea they existed.
It got worse. The trials comparing reboxetine against other drugs showed exactly the same picture: three small studies, 507 patients in total, showed that reboxetine was just as good as any other drug. They were all published. But 1,657 patients’ worth of data was left unpublished, and this unpublished data showed that patients on reboxetine did worse than those on other drugs. If all this wasn’t bad enough, there was also the side-effects data. The drug looked fine in the trials that appeared in the academic literature; but when we saw the unpublished studies, it turned out that patients were more likely to have side-effects, more likely to drop out of taking the drug and more likely to withdraw from the trial because of side-effects, if they were taking reboxetine rather than one of its competitors.
The whole article is a fantastic (and worrying) account of regulatory failure. At the heart of this problem lies a social and political misunderstanding of statistics which is being manipulated by drug companies for profit. A proper regulatory framework would ensure that all trials were registered in advance and their data recorded. Instead what happens is trials are commissioned by drug companies, published if they are favourable and quietly buried if they are not. This data mining would be mathematically rejected in an IB exploration coursework, yet these statistics still govern what pills doctors do and don’t prescribe.
When presented with data, therefore, your first question should be, “Where did this come from?” shortly followed by, “What about the data you’re not showing me?” Lies, damn lies and statistics indeed!
If you enjoyed this post, you might also like:
How contagious is Ebola? – how we can use differential equations to model the spread of the disease.
Tetrahedral Numbers – Stacking Cannonballs
This is one of those deceptively simple topics which actually contains a lot of mathematics – and it involves how spheres can be stacked, and how they can be stacked most efficiently. Starting off with the basics we can explore the sequence:
1, 4, 10, 20, 35, 56….
These are the total number of cannonballs in a stack as the stack gets higher. From the diagram we can see that this sequence is in fact a sum of the triangular numbers:
S1 = 1
S2 = 1+3
S3 = 1+3+6
S4 = 1+3+6+10
So we can sum the first n triangular numbers to get the general term of the tetrahedral numbers. Now, the general term of the triangular numbers is 0.5n² + 0.5n, therefore we can think of tetrahedral numbers as the summation:
$\bf \sum_{k=1}^{n}(0.5k+0.5k^2) = \sum_{k=1}^{n}0.5k+\sum_{k=1}^{n}0.5k^2$
But we have known results for the 2 summations on the right hand side:
$\bf \sum_{k=1}^{n}0.5k =\frac{n(n+1)}{4}$
and
$\bf \sum_{k=1}^{n}0.5k^2 = \frac{n(n+1)(2n+1)}{12}$
and when we add these two together (with a bit of algebraic manipulation!) we get:
$\bf S_n= \frac{n(n+1)(n+2)}{6}$
This is the general formula for the total number of cannonballs in a stack n rows high. We can notice that this is also the same as the binomial coefficient:
$\bf S_n={n+2\choose3}$
Therefore we can also find the tetrahedral numbers in Pascal’s triangle (4th diagonal column above).
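We can check both the summation and the closed formula with a quick sketch (Python, purely as an illustration):

```python
from math import comb

def tetrahedral(n):
    """Total cannonballs in a stack n rows high: n(n+1)(n+2)/6."""
    return n * (n + 1) * (n + 2) // 6

# build the sequence by summing triangular numbers, as in the derivation
triangular = [k * (k + 1) // 2 for k in range(1, 7)]
print([sum(triangular[:n]) for n in range(1, 7)])  # [1, 4, 10, 20, 35, 56]

# the closed form matches the binomial coefficient from Pascal's triangle
assert all(tetrahedral(n) == comb(n + 2, 3) for n in range(1, 200))
```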
The classic maths puzzle (called the cannonball problem), which asks which tetrahedral numbers are also square numbers, was solved in 1878. It turns out there are only 3 possible answers. The first square number (1) is also a tetrahedral number, as is the second square number (4), as is the 140th square number (19,600).
We can also look at something called the generating function of the sequence. This is a polynomial whose coefficients give the sequence terms. In this case the generating function is:
$\bf \frac{x}{(x-1)^4} = x + 4x^2 + 10x^3 + 20x^4 ...$
Having looked at some of the basic ideas behind the maths of stacking spheres we can look at a much more complicated mathematical problem. This is called Kepler’s Conjecture – and was posed 400 years ago. Kepler was a 17th century mathematician who in 1611 conjectured that there was no way to pack spheres to make better use of the given space than the stack above. The spheres pictured above fill about 74% of the given space. This was thought to be intuitively true – but unproven. It was chosen by Hilbert in 1900 as one of his famous 23 unsolved problems. Despite much mathematical effort it was only finally proved in 1998.
If you like this post you might also like:
The Poincaré Conjecture – the search for a solution to one of mathematics’ greatest problems.
Hailstone Numbers
This is a post inspired by the article on the same topic by the ever brilliant Plus Maths. Hailstone numbers are created by the following rules:
if n is even: divide by 2
if n is odd: multiply by 3 and add 1
We can then generate a sequence from any starting number. For example, starting with 10:
10, 5, 16, 8, 4, 2, 1, 4, 2, 1…
we can see that this sequence loops into an infinitely repeating 4,2,1 sequence. Trying another number, say 58:
58, 29, 88, 44, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1, 4, 2, 1…
and we see the same loop of 4,2,1.
In fact we can use the generator in the Plus Maths article to check any numbers we can think of, and we still get the pattern 4,2,1 looping. The question is, does every number end in this loop? Well, we don’t know. Every number mathematicians have checked does indeed lead to this loop, but that is not a proof. Perhaps there is a counter-example; we just haven’t found it yet.
Hailstone numbers are so called because, like hailstones, they rise and fall before eventually reaching one (the ground) – only to bounce up again. The proper mathematical name for this investigation is the Collatz conjecture, made in 1937 by the German mathematician Lothar Collatz.
One way to investigate this conjecture is to look at the length of time it takes a number to reach the number 1. Some numbers take longer than others. If we could find a number that didn’t reach 1 even in an infinite length of time then the Collatz conjecture would be false.
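A hailstone sequence generator takes only a few lines – a rough sketch in Python:

```python
def hailstone(n):
    """Follow the hailstone rules from n until we first reach 1."""
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(hailstone(10))           # [10, 5, 16, 8, 4, 2, 1]
print(len(hailstone(27)) - 1)  # 27 famously takes 111 steps to reach 1
```

Counting the iterations (the length of the sequence minus one) gives exactly the "time to reach 1" measure discussed above.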
The following graphic from wikipedia shows how different numbers (x axis) take a different number of iterations (y axis) to reach 1. We can see that some numbers take much longer than others to reach one. For example, the number 73 has the following pattern:
73, 220, 110, 55, 166, 83, 250, 125, 376, 188, 94, 47, 142, 71, 214, 107, 322, 161, 484, 242, 121, 364, 182, 91, 274, 137, 412, 206, 103, 310, 155, 466, 233, 700, 350, 175, 526, 263, 790, 395, 1186, 593, 1780, 890, 445, 1336, 668, 334, 167, 502, 251, 754, 377, 1132, 566, 283, 850, 425, 1276, 638, 319, 958, 479, 1438, 719, 2158, 1079, 3238, 1619, 4858, 2429, 7288, 3644, 1822, 911, 2734, 1367, 4102, 2051, 6154, 3077, 9232, 4616, 2308, 1154, 577, 1732, 866, 433, 1300, 650, 325, 976, 488, 244, 122, 61, 184, 92, 46, 23, 70, 35, 106, 53, 160, 80, 40, 20, 10, 5, 16, 8, 4, 2, 1…
so investigating what it is about certain numbers that leads to long chains is one possible approach to solving the conjecture. This conjecture has been checked by computers for all numbers up to a staggering 5.8 × 10^18. That would suggest that the conjecture could be true – but doesn’t prove it is. Despite looking deceptively simple, Paul Erdős – one of the great 20th century mathematicians – stated in the 1980s that “mathematics is not yet ready for such problems”, and it has remained unsolved over the past few decades. Maybe you could be the one to crack this problem!
If you liked this post you might also like:
Friendly Numbers, Solitary Numbers, Perfect Numbers – a look at some other number sequence problems.
Stellar Numbers Investigation
This is an old IB internal assessment question and so cannot be used for the new IB exploration – however it does give a good example of the sort of pattern investigation that is possible.
The task starts off with the fairly straightforward problem of trying to find the nth term formula for the triangular numbers:
Method 1
There are a number of ways to do this; probably the easiest is to notice that the second differences are always constant (+1 each time). Therefore we have a quadratic sequence in the form an² + bn + c
We can now substitute the known values when n = 1, 2, 3 into this to find 3 equations:
a(1)² + b(1) + c = 1
a(2)² + b(2) + c = 3
a(3)² + b(3) + c = 6
this gives us:
a + b + c = 1
4a + 2b + c = 3
9a + 3b + c = 6
We can then eliminate using simultaneous equations to find a, b, c. In fact our job is made easier by knowing that if the second difference is a constant, then the a in our formula will be half that value. Therefore as our second difference was 1, the value of a will be 1/2. We then find that b = 1/2 and c = 0. So the formula for the triangular numbers is:
0.5n² + 0.5n
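This elimination can be automated: given the first three terms of any quadratic sequence we can recover a, b and c. A sketch using exact fractions:

```python
from fractions import Fraction

def fit_quadratic(t1, t2, t3):
    """Fit an² + bn + c through the terms at n = 1, 2, 3."""
    a = Fraction(t3 - 2 * t2 + t1, 2)  # a is half the second difference
    b = Fraction(t2 - t1) - 3 * a      # equation 2 minus equation 1: 3a + b = t2 - t1
    c = Fraction(t1) - a - b           # equation 1: a + b + c = t1
    return a, b, c

print(fit_quadratic(1, 3, 6))    # triangular numbers: a = 1/2, b = 1/2, c = 0
print(fit_quadratic(1, 13, 37))  # 6-stellar numbers:  a = 6,  b = -6,  c = 1
```

The same function immediately handles the 6-stellar sequence met later in the task.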
Method 2
We can also derive this formula by breaking down triangular numbers into the following series:
1
1+2
1+2+3
1+2+3+4
Therefore we have the sum of an arithmetic series, with first term 1, common difference 1 and last term n, and so we can use the sum of an arithmetic series formula:
Sn = 0.5n(a1 + an)
Sn = 0.5n(1 + n) = 0.5n² + 0.5n
Once this is done, we are asked to find the nth term for the 6-stellar numbers (with 6 vertices) below:
which give the pattern 1, 13, 37, 73
Method 1
Once again we can use the method for quadratic sequences. The second difference is 12, giving us an² + bn + c with a = 12/2 = 6. Substituting values gives us:
1 = 6(1)² + b(1) + c
13 = 6(2)² + b(2) + c
This simplifies to:
1 = 6 + b + c
13 = 24 + 2b + c
Therefore we can eliminate to find that b = -6 and c = 1.
which gives 6n² – 6n + 1
Method 2
A more interesting method makes use of the triangular numbers. We can first note a recurrence relationship in the stellar numbers – each subsequent pattern contains all the previous patterns inside. In fact we can state the relationship as:
S1
S2 = S1 + outside star edge
S3 = S2 + outside star edge
S4 = S3 + outside star edge
The outside star edge of S2 can be thought of as 6 copies of the 2nd triangular number
The outside star edge of S3 can be thought of as 6 copies of the 3rd triangular number, but where we subtract 6×1 (the first triangular number) because we double count one of the internal points six times. We also subtract 6 as we double count each vertex.
The outside star edge of S4 can be thought of as 6 copies of the 4th triangular number, but where we subtract 6 x 3 (the second triangular number) because we double count three of the internal points six times. We also subtract 6 as we double count each vertex.
The outside star edge of S5 can be thought of as 6 copies of the 5th triangular number, but where we subtract 6 x 6 (the third triangular number) because we double count six of the internal points six times. We also subtract 6 as we double count each vertex.
Therefore we can form a formula for the outside star:
6(0.5n² + 0.5n) – 6(0.5(n-2)² + 0.5(n-2)) – 6
which simplifies to:
12(n -1)
We can now put this into our recurrence relationship:
S1 = 1
S2 = 1 + 12(n -1)
S3 = 1 + 12((n-1) -1) + 12(n -1)
S4 = 1 + 12((n-2) -1) + 12((n-1) -1) + 12(n -1)
Note that when we substituted the nth term formula for S2 into S3 we had to shift the n value to become n-1 as we were now on the 3rd term rather than 2nd term.
So:
S1 = 1
S2 = 1 + 12(n -1)
S3 = 1 + 12(n-1) + 12(n-2)
S4 = 1 + 12(n-1) + 12(n-2) + 12(n-3)
So:
S1 = 1 + 0
S2 = 1 + 12
S3 = 1 + 12+ 24
S4 = 1 + 12 + 24 + 36
So using the formula for the sum of an arithmetic series, Sn = 0.5n(a1 + an), we have
Sn = 1 + 0.5(n-1)(12 + 12(n-1))
Sn = 6n² – 6n + 1
Quite a bit more convoluted – but also more interesting, and also more clearly demonstrating how the sequence is generated.
Generalising for p-stellar numbers
We can then generalise to find stellar number formulae for different numbers of vertices. For example the 5-stellar numbers pictured above have the formula 5n² – 5n + 1. In fact the p-stellar numbers will have the formula pn² – pn + 1.
We can prove this by using the same recurrence relationship before:
S1
S2 = S1 + outside star edge
S3 = S2 + outside star edge
S4 = S3 + outside star edge
and by noting that the outside star edge is found in the same way as before for a p-stellar shape – only this time we subtract p for the number of vertices counted twice. This gives:
p(0.5n² + 0.5n) – p(0.5(n-2)² + 0.5(n-2)) – p
which simplifies to
2p(n-1)
and so substituting this into our recurrence formula:
S1 = 1
S2 = 1 + 2p(n-1)
S3 = 1 + 2p(n-2) + 2p(n-1)
S4 = 1 + 2p(n-3) + 2p(n-2) + 2p(n-1)
We have the same pattern as before – an arithmetic series in terms of 2p, and using Sn = 0.5n(a1 + an) we have:
Sn = 1 + 0.5(n-1)(2p + 2p(n-1))
Sn = pn² – pn + 1
Therefore, although our second method was slower, it allowed us to spot the pattern in the progression – and this then led very quickly to a general formula for the p-stellar numbers.
If you like this you might also like:
The Goldbach Conjecture – The Goldbach Conjecture states that every even integer greater than 2 can be expressed as the sum of 2 primes. No one has ever managed to prove this.
Making Music With Sine Waves
Sine and cosine waves are incredibly important for understanding all sorts of waves in physics. Musical notes can be thought of in terms of sine curves where we have the basic formula:
y = sin(bt)
where t is measured in seconds. b is then connected to the period of the function by the formula period = 2π/b.
When modeling sound waves we normally work in Hertz – where Hertz just means full cycles (periods) per second. This is also called the frequency. Sine waves with different Hertz values will each have a distinct sound – so we can cycle through scales in music through sine waves of different periods.
For example the sine wave for 20Hz is:
20Hz means 20 periods per second (i.e. 1 period per 1/20 of a second) so we can find the equivalent sine wave by using
period = 2π/b.
1/20 = 2π/b.
b = 40π
So, 20Hz is modeled by y = sin(40πt)
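The frequency-to-equation conversion is easy to script – a small sketch (illustration only):

```python
import math

def sine_coefficient(hertz):
    """b in y = sin(bt): period = 1/hertz = 2π/b, so b = 2π × hertz."""
    return 2 * math.pi * hertz

print(round(sine_coefficient(20) / math.pi, 9))     # 40.0    -> y = sin(40πt)
print(round(sine_coefficient(20000) / math.pi, 9))  # 40000.0 -> y = sin(40000πt)

# sample one full period of the 20Hz wave (t from 0 to 1/20 of a second)
b = sine_coefficient(20)
samples = [math.sin(b * k / 160) for k in range(9)]
```

Feeding samples like these to a sound library at the right sample rate is exactly how tools such as Wolfram Alpha generate the audio for each frequency.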
You can plot this graph using Wolfram Alpha, and then play the sound file to hear what 20Hz sounds like. 20Hz is regarded as the lower range of hearing spectrum for adults – and is a very low bass sound.
The middle C on a piano is modeled with a wave of 261.626Hz. This gives the wave
which has the equation y = sin(523.252πt), since b = 2π × 261.626 ≈ 1643.84. Again you can listen to this sound file on Wolfram Alpha.
At the top end of the sound spectrum for adults is around 16,000 – 20,000Hz. Babies have an ability to hear higher pitched sounds, and we gradually lose this higher range with age. This is the sine wave for 20,000Hz:
which has the equation, y = sin(40,000πt). See if you can hear this file - warning it’s a bit painful!
As well as sound waves, the whole of the electromagnetic spectrum (radio waves, microwaves, infrared, visible light, ultraviolet, x rays and gamma rays) can also be thought of in terms of waves of different frequencies. So, modelling waves using trig graphs is an essential part of understanding the physical world.
If you enjoyed this post you might also like:
Fourier Transforms – the most important tool in mathematics? - how we can use advanced mathematics to understand waves – with applications for everything from WIFI, JPEG compression, DNA analysis and MRI scans.
Surviving the Zombie Apocalypse
This is part 2 in the maths behind zombies series. See part 1 here
We have previously looked at how the paper from mathematicians from Ottawa University discuss the mathematics behind surviving the zombie apocalypse – and how the mathematics used has many other modelling applications – for understanding the spread of disease and the diffusion of gases. In the previous post we saw how the zombie diffusion rate could be predicted by the formula:
In this equation Z(x,t) stands for the density of zombies at point x and time t. Z0 stands for the initial zombie density – where all zombies are starting at the same point (x between 0 and 1). L stands for the edge of the domain. This is a 1 dimensional model – where zombies only travel in a straight line. For modelling purposes, this would be somewhat equivalent to being trapped in a 50 metre by 1 metre square fenced area – with (0,0) as the bottom left corner of the fence. L would be 50 in this case, and all zombies would initially be in the 1 metre square which went through the origin.
We saw that as the time, t gets large this equation can be approximated by:
Which means that after a long length of time our 50 metre square fenced area will have an equal density of zombies throughout. If we started with 100 zombies in our initial 1 metre square area (say emerging from a tomb), then Z0 = 100 and with L = 50 we would have an average density of 100/50 = 2 zombies per metre squared.
When will the zombies arrive?
So, say you have taken the previous post’s advice and run as far away as possible. So, you’re at the edge of the 50 metre long fence. The next question to ask therefore, how long before the zombies reach you? To answer this we need to solve the initial equation Z(x,t) to find t when x = 50 and Z(50,t) = 1. We solve to find Z(50,t) = 1 because this represents the time t when there is a density of 1 zombie at distance 50 metres from the origin. In other words when a zombie is standing where you are now! Solving this would be pretty tough, so we do what mathematicians like to do, and take an approximation. This approximate solution for t is given by:
where L is the distance we’re standing away (50 metres in this case) and D is the diffusion rate. D can be altered to affect the speed of the zombies. In the study they set D as 100 – which is claimed to be consistent with a slow, shuffling zombie walk. Therefore the time the zombies will take to arrive is approximately t = 0.32(50)²/100 = 8 minutes. If we are a slightly further distance away (say we are trapped along a 100 metre fence) then the zombies will arrive in approximately t = 0.32(100)²/100 = 32 minutes.
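As a sketch, the arrival-time approximation is a one-liner:

```python
def zombie_arrival_minutes(L, D=100):
    """Approximate time (minutes) for diffusing zombies to cover
    L metres, with diffusion rate D (100 = slow shuffling walk)."""
    return 0.32 * L**2 / D

print(zombie_arrival_minutes(50))   # 8.0 minutes along the 50 metre fence
print(zombie_arrival_minutes(100))  # 32.0 minutes along a 100 metre fence
```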
Fight or flight?
Fighting (say by lobbing missiles at the oncoming hordes) would slow the diffusion rate D, but would probably be less effective than running – as the time is rapidly increased by the L² factor. Let’s look at a scenario to compare:
You are 20 metres from the zombies. You can decide to spend 1 minute running an extra 30 metres away (you’re not in good shape) to the edge of the fence (no rocks here) or can spend your time lobbing rocks with your home-made catapult to slow the advance. Which scenario makes more sense?
Scenario 1
You get to the edge of the fence in 1 minute. The zombies will get to the edge of the fence in t = 0.32(50)²/100 = 8 minutes. You therefore have an additional 7 minutes to sit down, relax, and enjoy your last few moments before the zombies arrive.
Scenario 2
You successfully manage to slow the diffusion rate to D = 50 as the zombies are slowed by your sharp-shooting. The zombies will arrive in 0.32(20)²/50 ≈ 2.6 minutes. If only you’d paid more attention in maths class.
If you liked this post you might also like:
How contagious is Ebola? – using differential equations to model infections.
Modelling for Zombies
Some mathematicians at the University of Ottawa have just released a paper looking at the mathematics behind a zombie apocalypse. What are the best strategies for avoiding being eaten? How quickly would zombies spread through the population? This may seem a little silly as zombies aren’t real – but actually the mathematics behind how diseases spread through a population is very useful – and, well, zombies are as good a way as any to introduce this.
The graphic above from the paper shows how zombie movement can be modelled. Given that zombies randomly move around, and any bumping would lead to a tendency towards finding space, they are modelled in the same way that we model the diffusion of gas. If you start with a small concentrated number of particles they will spread out to fill the given space like shown above.
Diffusion can be modelled by the diffusion equation above. We have:
t: time (in specified units)
x: position on the x axis.
w: the density of zombies at time t and point x. We could also write w(x,t) in function notation.
a: a is a constant.
The “curly d” in the equation means the partial differential. This works in the same way as normal differentiation, except that we only differentiate with respect to one variable at a time – treating all the other letters as constants. This is easier to show with an example.
z = 3xy²
The partial differential of z with respect to x is 3y²
The partial differential of z with respect to y is 6xy
So, going back to our diffusion equation, we need to find a function w(x,t) which satisfies this equation – and then we can use this function to model the spread of zombies through an area. There are lots of different solutions to this equation (see a list here). One of the easiest is:
w(x,t) = A(x² + 2at) + B
where we have introduced 2 new constants, A and B.
We can check that this works by finding the left-hand side and right-hand side of the diffusion equation:
As the LHS and RHS are equal, the diffusion equation is satisfied. Therefore we have the following zombie density model:
w(x,t) = A(x² + 2at) + B
this will tell us at point x and time t what the zombie density is. We would need particular values to then find A, B and a. For example, we can restrict x between 0 and 1 and t between 1 and 5, then set A = -1, B = 21, a = 2 to give:
w(x,t) = −x² − 4t + 21
This begins to fit the behaviour we want – at any fixed point x the density will decrease with time, and as we move further away from the initial point (x = 0) we have lower density. This is only very rough however.
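Assuming the diffusion equation referred to above is ∂w/∂t = a·∂²w/∂x² (the equation itself appears as an image in the original post), we can check numerically that the candidate solution satisfies it, using central finite differences – a sketch:

```python
def w(x, t, A=-1.0, B=21.0, a=2.0):
    """Candidate solution w(x,t) = A(x² + 2at) + B."""
    return A * (x**2 + 2 * a * t) + B

# central finite differences at an arbitrary point
h, x, t, a = 1e-4, 0.5, 3.0, 2.0
dw_dt   = (w(x, t + h) - w(x, t - h)) / (2 * h)
d2w_dx2 = (w(x + h, t) - 2 * w(x, t) + w(x - h, t)) / h**2
print(dw_dt, a * d2w_dx2)  # both sides ≈ -4, so the equation is satisfied
```

Analytically: ∂w/∂t = 2aA and a·∂²w/∂x² = a·2A, which are equal for any A, B and a.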
A more complicated solution to the diffusion equation is given above. In this equation Z(x,t) stands for the density of zombies at point x and time t. Z0 stands for the initial zombie density – where all zombies are starting at the same point (x between 0 and 1). L stands for the edge of the domain. This is a 1 dimensional model – where zombies only travel in a straight line. For modelling purposes, this would be somewhat equivalent to being trapped in a 50 metre by 1 metre square fenced area – with (0,0) as the bottom left corner of the fence. L would be 50 in this case, and all zombies would initially be in the 1 metre square which went through the origin.
Luckily, as t gets large this equation can be approximated by a constant density of Z0/L. This means that after a long length of time our 50 metre square fenced area will have an equal density of zombies throughout. If we started with 100 zombies in our initial 1 metre square area (say emerging from a tomb), then with Z0 = 100 and with L = 50 we would have an average density of 100/50 = 2 zombies per metre squared. In other words zombies would be evenly spaced out across all available space.
So, what advice can you take from this when faced with a zombie apocalypse? Well if zombies move according to diffusion principles then initially you have a good advantage to outrun them – after all they will be moving randomly and you will be running linearly as far away as possible. That will give you some time to prepare your defences for when the zombies finally reach you. As long as you get far enough away, when they do reach your corner their density will be low and therefore much easier to fight.
Good luck!
If you liked this post you might also like:
Surviving the Zombie Apocalypse – more zombie maths. How long before the zombies arrive?
How contagious is Ebola? – using differential equations to model infections.
Maths Puzzles
These should all be accessible for top sets in KS4 and post 16. See if you can manage to get all 3 correct.
Puzzle Number 1
Why is x^x undefined when x = 0?
Puzzle Number 2
I multiply 3 consecutive integers together. My answer is 8 times the smallest of the 3 integers I multiplied. What 3 numbers could I have chosen?
Puzzle Number 3
You play a game as follows:
1 point for a prime number
2 points for an even number
-3 points for a square number
(note if you choose (say) the number 2 you get +1 for being a prime and +2 for being an even number giving a total of 3 points)
You have the numbers 1-9 to choose from. You need to choose 6 numbers such that their score adds to zero. How many different ways can you find to win this game?
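Once you have tried the third puzzle by hand, a brute-force check over all 84 possible selections of 6 numbers takes only a few lines – a sketch (it computes the answer, so run it only after you’ve had a go):

```python
from itertools import combinations

def score(n):
    points = 0
    if n in (2, 3, 5, 7):  # primes between 1 and 9
        points += 1
    if n % 2 == 0:         # even numbers
        points += 2
    if n in (1, 4, 9):     # square numbers between 1 and 9
        points -= 3
    return points

assert score(2) == 3  # matches the worked example for the number 2

wins = [c for c in combinations(range(1, 10), 6) if sum(map(score, c)) == 0]
print(len(wins))  # the number of winning selections
```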
When you have solved all 3 puzzles, click here to find out the solutions.
If you like this post, you might also like:
A Maths Snooker Puzzle. A great little puzzle which tests logic skills.
Visualising Algebra Through Geometry. How to use geometry to simplify puzzles
The Chinese Postman Problem
There is a fantastic pdf resource from Suffolk Maths which goes into a lot of detail on this topic – and I will base my post on their resource. Visit their site for a more in-depth treatment.
The Chinese Postman Problem was first posed by a Chinese mathematician in 1962. It involved trying to calculate how a postman could best choose his route so as to minimise his time. This is the problem that Kuan Mei-Ko tried to solve:
How could a postman travel along every edge of the graph in the shortest possible time?
Solving this requires using a branch of mathematics called graph theory, created by Leonhard Euler. This mathematics looks to reduce problems to network graphs like that shown above. Before we can solve this we need to understand some terminology:
Above we have 3 graphs. A graph which can be drawn without taking the pen off the paper or retracing any steps is called traversable (and has an Euler trail). Graph 1 is not traversable. Graph 2 is traversable as long as you start at either A or D, and Graph 3 is traversable from any point that you start. It turns out that what dictates whether a graph is traversable or not is the order of their vertices.
Looking at each letter we count the number of lines at the vertex. This is the order. For graph 1 we have 3 lines from A so A has an order of 3. All the vertices on graph 1 have an order of 3. For graph 2 we have the orders (from A, B, C, D, E in turn) 3, 4, 4, 3, 2. For graph 3 we have the orders 4,4,4,4,2,2.
This allows us to arrive at a rule for working out if a graph is traversable.
If all orders are even then a graph is traversable. If there are 2 odd vertices then we can find a traversable graph by starting at one of the odd vertices and finishing at the other. We need therefore to pair up any odd vertices on the graph.
Next we need to understand how to pair the odd vertices. For example if I have 2 odd vertices, they can only be paired in one way. If I have 4 odd vertices (say A,B,C,D) they can be paired in 3 different ways (either AB and CD, or AC and BD, or AD and BC). The general rule to calculate how many ways n odd vertices can be paired up is (n-1) x (n-3) x (n-5) … x 1.
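Both checks are easy to code – a sketch for finding odd vertices and counting their pairings:

```python
def odd_vertices(orders):
    """Vertices with an odd order, given a {vertex: order} mapping."""
    return [v for v, d in orders.items() if d % 2 == 1]

def pairings(n):
    """Ways to pair up n odd vertices: (n-1) x (n-3) x ... x 1."""
    result = 1
    for k in range(n - 1, 0, -2):
        result *= k
    return result

graph2 = {"A": 3, "B": 4, "C": 4, "D": 3, "E": 2}
print(odd_vertices(graph2))                   # ['A', 'D'] -> start at A or D
print(pairings(2), pairings(4), pairings(6))  # 1 3 15
```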
So now we are ready to actually solve the Chinese Postman Problem. Here is the algorithm:
So, we can now apply this to the Chinese Postman Problem below:
Step 1: We can see that the only odd vertices are A and H.
Step 2: We can only pair these one way (AH)
Step 3 and 4: The shortest way to get from A to H is ABFH which is length 160. This is shown below:
Step 5 and 6: The total distance along all the lines in the initial diagram is 840m. We add our figure of 160m to this. Therefore the optimum (minimum) distance it is possible to complete the route is 1000m.
Step 7: We now need to find a route of distance 1000m which includes the loop ABFH (or in reverse) which starts and finishes at one of the odd vertices. One solution provided by Suffolk Maths is ADCGHCABDFBEFHFBA. There are others!
The Bridges of Königsberg
Graph theory was invented as a method to solve the Bridges of Königsberg problem by Leonhard Euler. This was a puzzle from the 1700s – Königsberg was a Prussian city with 7 bridges, and the question was, could anyone walk across all 7 without walking over any bridge twice. By simplifying the problem into one of connected lines, Euler was able to prove that this was in fact impossible.
If you like this post you might also like:
Knight’s Tour – This puzzle dates back over 1000 years and concerns the ways in which a knight can cover all squares on a chess board.
Game Theory and Tic Tac Toe – Tic Tac Toe has already been solved using Game Theory – this topic also brings in an introduction to Group Theory.
# Glossary & FAQ
## Glossary of terms used in inStrain
Note
This glossary is meant to give a conceptual overview of the terms used in inStrain. See Expected output for explanations of specific output data.
ANI
Average nucleotide identity. The average nucleotide distance between two genomes or .fasta files. If two genomes have a difference every 100 base-pairs, the ANI would be 99%
conANI
Consensus ANI - average nucleotide identity values calculated based on consensus sequences. This is commonly reported as “ANI” in other programs. Each position on the genome is represented by the most common allele (also referred to as the consensus allele), and minor alleles are ignored.
popANI
Population ANI - a new term to describe a unique type of ANI calculation performed by inStrain that considers both major and minor alleles. If two populations share any alleles at a locus, including minor alleles, it does not count as a difference when calculating popANI. It’s easiest to describe with an example: consider a genomic position where the reference sequence is ‘A’ and 100 reads are mapped to the position. Of the 100 mapped reads, 60 have a ‘C’ and 40 have an ‘A’ at this position. In this example the reads share a minor allele with the reference genome at the position, but the consensus allele (most common allele) is different. Thus, this position would count as a difference in conANI calculations (because the consensus alleles are different) and would not count as a difference in popANI calculations (because the reference sequence is present as an allele in the reads). See Important concepts for examples.
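The distinction is easy to see in code. This toy sketch (not inStrain’s actual implementation – real allele calling uses a null model of sequencing error, and the `min_freq` cutoff here is an illustrative stand-in) classifies a single genomic position:

```python
from collections import Counter

def position_differences(ref_base, read_bases, min_freq=0.05):
    """Classify one position for conANI / popANI counting.
    Returns (conANI_difference, popANI_difference). Alleles below
    min_freq are ignored (a stand-in for real allele calling)."""
    counts = Counter(read_bases)
    total = len(read_bases)
    alleles = {b for b, c in counts.items() if c / total >= min_freq}
    consensus = counts.most_common(1)[0][0]
    con_diff = consensus != ref_base   # consensus allele differs from reference
    pop_diff = ref_base not in alleles # reference shares no allele with the reads
    return con_diff, pop_diff

# the example from the glossary: ref 'A', 60 reads 'C', 40 reads 'A'
reads = ["C"] * 60 + ["A"] * 40
print(position_differences("A", reads))  # (True, False)
```

The position counts against conANI but not against popANI, exactly as described above.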
Representative genome
Representative genomes (RGs) are genomes that are used to represent some taxa. For example you could have a series of representative genomes to represent each clade of E. coli (one genome for each clade), or you could have one representative genome for the entire species of E. coli (in that case it would be a Species Representative Genome (SRG)). The base unit of inStrain-based analysis is the representative genome, and they are usually generated using the program dRep
Species representative genome
A Species Representative Genome (SRG) is a representative genome that is used to represent an entire single microbial species.
Genome database
A collection of representative genomes that are mapped to simultaneously (competitive mapping).
nucleotide diversity
A measurement of genetic diversity in a population (microdiversity). We measure nucleotide diversity using the method from Nei and Li 1979 (often referred to as ‘pi’ π in the population genetics world). InStrain calculates nucleotide diversity at every position along the genome, based on all reads, and averages values across genes / genomes. This metric is influenced by sequencing error, but within-study error rates should be consistent, and this effect is often minor compared to the extent of biological variation observed within samples. This metric is nice because it is not affected by coverage. The formula for calculating nucleotide diversity is one minus the sum of the squared frequencies of each base: 1 - [(frequency of A)^2 + (frequency of C)^2 + (frequency of G)^2 + (frequency of T)^2].
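The formula can be checked directly (an illustrative sketch, not inStrain's implementation):

```python
def nucleotide_diversity(base_counts):
    """1 minus the sum of squared allele frequencies at one position (Nei & Li style)."""
    total = sum(base_counts.values())
    return 1 - sum((n / total) ** 2 for n in base_counts.values())

# A position with 60 C's and 40 A's out of 100 reads:
pi = nucleotide_diversity({'A': 40, 'C': 60, 'G': 0, 'T': 0})
print(round(pi, 3))  # 0.48
```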
microdiversity
We use the term microdiversity to refer to intraspecific genetic variation, i.e. the genetic variation between cells within a microbial species.
clonality
The opposite of nucleotide diversity (1 - nucleotide diversity). A deprecated term used in older versions of the program.
SNV
Single nucleotide variant. A single nucleotide change that is present in a fraction of a population. Can also be described as a genomic locus with multiple alleles present. We identify and call SNVs using a simple model to distinguish them from errors, and more importantly in our experience, careful read mapping and filtering of paired reads to be assured that the variants (and the reads that contain them) are truly from the species being profiled, and not from another species in the metagenome (we call it ‘mismapping’ when this happens). Note that a SNV refers to genetic variation within a read set.
SNS
Single nucleotide substitution. A single nucleotide change that has a fixed difference between two populations. If the reference genome has a ‘A’ at some position, but all of the reads have a ‘C’ at that position, that would be a SNS (if half of the reads have an ‘A’ and half of the reads have a ‘C’, that would be an SNV).
divergent site
A position in the genome where either an SNV or SNS is present.
SNP
Single Nucleotide Polymorphism. In our experience this term means different things to different people, so we have tried to avoid using it entirely (instead referring to SNSs, SNVs, and divergent sites).
linkage
A measure of how likely two divergent sites are to be inherited together. If two alleles are present on the same read, they are said to be “linked”, meaning that they are found together on the same genome. Loci are said to be in “linkage disequilibrium” when the frequency of association of their different alleles is higher or lower than what would be expected if the loci were independent and associated randomly. In the context of microbial population genetics, linkage decay is often used as a way to detect recombination among members of a microbial population. InStrain uses the metrics r2 (r squared) and D’ (D prime) to measure linkage.
coverage
A measure of sequencing depth. We calculate coverage as the average number of reads mapping to a region. If half the bases in a scaffold have 5 reads on them, and the other half have 10 reads, the coverage of the scaffold will be 7.5
breadth
A measure of how much of a region is covered by sequencing reads. Breadth is an important concept that is distinct from sequencing coverage, and gives you an approximation of how well the reference sequence you’re using is represented by the reads. Calculated as the percentage of bases in a region that are covered by at least a single read. A breadth of 1 means that all bases in a region have at least one read covering them.
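Both coverage and breadth can be computed from a per-base depth vector; here is a minimal sketch (illustrative only) reproducing the examples above:

```python
def coverage_and_breadth(depths):
    """depths: per-base read depth along a region."""
    coverage = sum(depths) / len(depths)                # mean depth
    breadth = sum(d > 0 for d in depths) / len(depths)  # fraction of bases covered
    return coverage, breadth

# Half the bases at depth 5, half at depth 10 -> coverage 7.5, breadth 1.0
depths = [5] * 50 + [10] * 50
print(coverage_and_breadth(depths))  # (7.5, 1.0)
```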
relative abundance
The percentage of total reads that map to a particular entity. If a metagenome has 1,000,000 reads and 1,000 of them map to a particular genome, that genome is at 0.1% relative abundance.
contig
A contiguous sequence of DNA. Usually used as a reference sequence for mapping reads against. The terms contig and scaffold are used interchangeably by inStrain.
scaffold
A sequence of DNA that may have a string of “N”s in it representing a gap of unknown length. The terms contig and scaffold are used interchangeably by inStrain.
iRep
A measure of how fast a population was replicating at the time of DNA extraction. Based on comparing the sequencing coverage at the origin vs. terminus of replication, as described in Brown et al., Nature Biotechnology 2016.
mutation type
Describes the impact of a nucleotide mutation on the amino acid sequence of the resulting protein. N = non-synonymous mutation (the encoded amino-acid changes due to the mutation). S = synonymous mutation (the encoded amino-acid does not change due to the mutation; should happen ~1/6 of the time by random chance due to codon redundancy). I = intergenic mutation. M = multi-allelic SNV with more than one change (rare).
dN/dS
A measure of whether the set of mutations in a gene are biased towards synonymous (S) or non-synonymous (N) mutations. dN/dS is calculated based on mutations relative to the reference genome. dN/dS > 1 means the bias is towards N mutations, indicating the gene is under active selection to mutate. dN/dS < 1 means the bias is towards S mutations, indicating the gene is under stabilizing selection to not mutate. dN/dS = 1 means that N and S mutations occur at the rate expected by mutating positions randomly, potentially indicating the gene is non-functional.
pN/pS
Very similar to dN/dS, but calculated at positions with at least two alleles present rather than in relation to the reference genome.
fasta file
A file containing a DNA sequence. Details on this file format can be found on wikipedia
bam file
A file containing metagenomic reads mapped to a DNA sequence. Very similar to a .sam file. Details can be found online
scaffold-to-bin file
A text file with two columns separated by tabs, where the first column is the name of a scaffold and the second column is the name of the bin / genome the scaffold belongs to. Can be created using the script parse_stb.py that comes with the program dRep. See Expected output for more info
genes file
A file containing the nucleotide sequences of all genes to profile, as called by the program Prodigal. See Expected output for more info
mismapped read
A read that is erroneously mapped to a genome. InStrain profiles a population by looking at the reads mapped to a genome. These reads are short, and sometimes reads that originated from one microbial population map to the representative genome of another (for example if they share homology). There are several techniques that can be used to reduce mismapping to the lowest extent possible.
multi-mapped read
A read that maps equally well to multiple different locations in the .fasta file. Most mapping software will randomly select one position to place multi-mapped reads. There are several techniques that can be used to reduce multi-mapped reads to the lowest extent possible, including increasing the minimum MAPQ cutoff to >2 (which will eliminate them entirely).
inStrain profile
An inStrain profile (aka IS_profile, IS, ISP) is created by running the inStrain profile command. It contains all of the program’s internal workings, cached data, and is where the output is stored. Additional commands can then be run on an IS_profile, for example to analyze genes, compare profiles, etc., and there is lots of nice cached data stored in it that can be accessed using python.
null model
The null model describes the probability that the number of reads supporting a variant base could be due to random sequencing error alone, assuming a Q30 score. The default false discovery rate with the null model is 1e-6 (one in a million).
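A simplified version of this idea can be sketched with a binomial tail probability (illustrative only; inStrain's actual null model is precomputed internally and its exact error model may differ):

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): chance that k or more of n reads err."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# With 100 reads at a Q30 per-base error rate (1e-3), find the smallest
# variant-read count whose chance occurrence falls below the 1e-6 FDR.
k = next(k for k in range(1, 20) if binom_tail(100, k, 1e-3) < 1e-6)
print(k)  # 5
```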
mm
The maximum number of mismatches a read-pair can have to be counted in the metric being considered. Behind the scenes, inStrain actually calculates pretty much all metrics for every read-pair mismatch level. That is, only including read pairs with 0 mismatches to the reference sequences, only including read pairs with <= 1 mismatch to the reference sequences, all the way up to the number of mismatches associated with the “PID” parameter. Most of the time when it then generates user-facing output, it uses the highest mm possible and deletes the column label. If you’d like access to information on the mm-level, see the section titled “Dealing with mm”
mapQ score
MapQ scores are a measure of how well a read aligns to a particular position. They are assigned to each read mapped by bowtie2, but the details of how they are generated are incredibly confusing. MapQ scores of 0 and 1 have a special meaning: if a read maps equally well to multiple different locations on a .fasta file, it always gets a MapQ score of 0 or 1.
### How does inStrain compare to other bioinformatics tools for strain analysis?
A major difference is inStrain’s use of the popANI and conANI, which allow consideration of minor alleles when performing genomic comparisons. See Important concepts for more information.
### What can inStrain do?
inStrain includes calculation of nucleotide diversity, calling SNPs (including non-synonymous and synonymous variants), reporting accurate coverage / breadth, and calculating linkage disequilibrium in the contexts of genomes, contigs, and individual genes.
inStrain can also compare the frequencies of fixed and segregating variants between sequenced populations with extremely high accuracy, outperforming other popular strain-resolved metagenomics programs.
The typical use-case is to generate a .bam file by mapping metagenomic reads to a bacterial genome that is present in the metagenomic sample, and using inStrain to characterize the microdiversity present.
Another common use-case is detailed strain comparisons that involve comparing the genetic diversity of two populations and calculating the extent to which they overlap. This allows for the calculation of population ANI values for extremely similar genomic populations (>99.999% average nucleotide identity).
### How does inStrain work?
The reasoning behind inStrain is that every sequencing read is derived from a single DNA molecule (and thus a single cell) in the original population of a given microbial species. During assembly, the consensus of these reads is assembled into contigs and these contigs are binned into genomes - but by returning to assess the variation in the reads that assembled into the contigs, we can characterize the genetic diversity of the population that contributed to the contigs and genomes.
The basic steps:
1. Map reads to a .fasta file to create a .bam file
2. Calculate nucleotide diversity and SNVs
3. Optional: calculate gene statistics and SNV function
4. Optional: compare SNVs between samples
### What is unique about the way that inStrain compares strains?
Most strain-resolved pipelines compare the dominant allele at each position. If you have two closely related strains A and B in sample 1, with B being at higher abundance, and two closely related strains A and C in sample 2, with C being at higher abundance, most strain comparison pipelines will in actuality compare strain B and C. This is because they work on the principle of finding the dominant strain in each sample and then comparing the dominant strains. InStrain, on the other hand, is able to identify the fact that A is present in both samples. This is because it doesn’t just compare the dominant alleles, but compares all alleles in the two populations. See module_descriptions and choosing_parameters for more information.
### What is a population?
To characterize intra-population genetic diversity, it stands to reason that you first require an adequate definition of “population”. InStrain relies on population definitions that are largely technically motivated, but that also coincide conveniently with plausibly real biological microbial population boundaries (see Olm et al. mSystems 2020 and Jain et al. Nature Communications 2018). Often, we dereplicate genomes from an environment at average nucleotide identities (ANI) of 95% to 99%, depending on the heterogeneity expected within each sample - lower ANIs may be preferred for more complex samples. We then assign reads to each genome’s population by stringently requiring that read pairs used for SNP calling be properly mapped pairs with a similarity to the consensus of at least 95% by default, so that the cell each read pair came from was at least 95% similar to the average consensus genotype at that position. InStrain makes it possible to adjust these parameters as needed, and builds plots which can be used to estimate the best cutoffs for each project.
### What are inStrain’s computational requirements?
The two computational resources to consider when running inStrain are the number of processes given (-p) and the amount of RAM on the computer (usually not adjustable unless using cloud-based computing). Using inStrain v1.3.3, running inStrain on a .bam file of moderate size (1 Gbp or less) will generally take less than an hour with 6 cores, and use about 8Gb of RAM. InStrain is designed to handle large .bam files as well. Running a huge .bam file (30 Gbp) with 32 cores, for example, will take ~2 hours and use about 128Gb of RAM. The more processes you give inStrain the faster it will run, but also the more RAM it will use. See Important concepts for information on reducing compute requirements.
### How can I infer the relative abundance of each strain cluster within the metagenomes?
At the moment you can only compare the relative abundance of the populations between samples. Say strain A, based on genome X, is in samples 1 and 2. You now know that genome X is the same strain in both samples, so you could compare the relative abundance of genome X in samples 1 and 2. But if multiple strains are present within genome X, there’s no way to phase them out.
InStrain compare isn’t really phasing out multiple strains in a sample; it’s just seeing if there is micro-diversity overlap between samples. Conceptually inStrain operates on the idea of “strain clouds” more than distinct strains. InStrain isn’t able to tell the number of strains that are shared between two samples either, just that there is population-level overlap for some particular genome. Doing haplotype phasing is something we’ve considered and may add in the future, but the feature won’t be coming any time soon.
### How can I determine the relative abundance of detected populations?
Relative abundance can be calculated a number of different ways, but the way I like to do it is “percentage of reads”. So if your sample has 100 reads, and 15 reads map to genome X, the relative abundance of genome X is 15%. Because inStrain does not know the total number of reads per sample, it cannot calculate this metric for you. You have to calculate it yourself by dividing the value filtered_read_pair_count reported in the inStrain genome_wide output by the total number of reads in the sample.
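As a sketch of that calculation (the field name filtered_read_pair_count comes from the text above; the helper function itself is hypothetical):

```python
def relative_abundance(filtered_read_pair_count, total_reads_in_sample):
    """Percentage of the sample's reads that mapped to the genome."""
    return 100 * filtered_read_pair_count / total_reads_in_sample

# The example from the text: 15 of 100 reads map to genome X -> 15%
print(relative_abundance(15, 100))  # 15.0
```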
### What mapping software can be used to generate .bam files for inStrain?
Bowtie2 is a common one that works well, but any software that generates .bam files should work. Some mapping software modifies .fasta file headers during mapping (including the tools BBMap and SNAP). Include the flag --use_full_fasta_header when mapping with these programs to properly handle this.
# The area of a sector of a circle of radius 5 cm
Question:
The area of a sector of a circle of radius $5 \mathrm{~cm}$ is $5 \pi \mathrm{cm}^{2}$. Find the angle contained by the sector.
Solution:
We know that the area A of a sector of an angle θ in the circle of radius r is given by
$A=\frac{\theta}{360^{\circ}} \times \pi r^{2}$
It is given that radius $r=5 \mathrm{~cm}$ and area $A=5 \pi \mathrm{cm}^{2}$.
Now we substitute the value of r and A in above formula to find the value of θ,
$5 \pi=\frac{\theta}{360^{\circ}} \times \pi \times 5 \times 5$
$\theta=\frac{360^{\circ} \times 5 \pi}{\pi \times 5 \times 5}$
$=72^{\circ}$
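A quick numerical check of the result (illustrative):

```python
from math import pi

r, A = 5.0, 5 * pi             # radius in cm, sector area in cm^2
theta = 360 * A / (pi * r**2)  # invert A = (theta/360) * pi * r^2
print(round(theta, 6))  # 72.0
```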
# Every planar graph can be embedded on a sphere - formal proof?
The proof of the following theorem:
A graph can be embedded on the surface of a sphere without crossings if and only if it can be embedded in the plane without crossings.
is very short-
The plane is topologically a sphere with a missing point at the North pole.
Now, before I start: I do believe this theorem is true. My goal was to do some reflection on why I believe it's true. It's a good practice; we should always ask ourselves 'what makes me believe it?' and 'should I believe it?'. Similarly, we should ask: how much abstract thinking and intuition is allowed in mathematics? What if we reach wrong conclusions by using too much intuition to prove theorems - if so, where is the boundary? We take certain statements as axioms, use logical thinking and derive conclusions which we call theorems. And so on. This is the only proper way of doing mathematics.
Why should I consider this as a convincing argument? Topology is just a purely mathematical construct. Here, we have a real-world problem and want to solve it using the formal methods of mathematics. I don't trust simplified proofs based on intuition - what if I'm being fooled into thinking this is true for every planar graph? Even if the smartest person in the world "believed it", there may still exist a counterexample.
A proof is valid if it shows that, in this example, there doesn't exist a planar graph that cannot be drawn on a sphere.
Can we make it a bit more precise and formal? I believe the first step is to state the theorem in formal terms.
1. How to define edges drawn on a sphere, their intersections?
2. What allows us to model this problem in topological terms?
3. Why is homeomorphism believed to guarantee that if there are no intersections in the topological space of the plane, then there are none in the topological space of the sphere minus a point? Should we believe it?
What I'm concerned about is the transition from natural description of the problem to topological, formal one. We solve the problem in the domain of topology and assume that the problem stated in topological terms is equivalent to the original problem. In other words, we are assuming that topological formulation of the problem is equivalent to the original problem - this assumption is based on intuition!. If not, what axioms or theorems (which is what maths is made of) tell us we can do that - e.g. make a topological space out of a graph, solve the problem in topology domain, and come back to our graph-theory domain?
• What part of this strikes you as a 'real world problem'? Literally every meaning-carrying word in your initial theorem - 'graph', 'embedded', 'surface', 'sphere', 'crossing', and 'plane' - has a formal definition and meaning. If you're concerned about the transition away from the 'natural description' of the problem, that process has started and arguably completed well before this post even starts. – Steven Stadnicki Feb 25 '15 at 5:55
• (Also, one-character edits without content to bump your question back to the top of the site are generally frowned upon.) – Steven Stadnicki Feb 25 '15 at 5:58
• If I didn't do that, everyone has only a few minutes to notice my question. Nobody cares what's on page 2, 10, 100. Anyway, would you be so kind and comment the discussion between me and Mike Miller below? My major concern is why it's believed that topological graph theory correcly models the problem. – user216094 Feb 25 '15 at 20:38
• Look at it from the philosophical point of view. Now, is it a complete nonsense? If yes, why? – user216094 Mar 7 '15 at 22:46
Hint: Look up stereographic projection (e.g. wikipedia), and check to see that this carries non-overlapping edges in the plane to non-overlapping edges on the sphere, and vice versa, and also carries the shared endpoints of edges in the plane to shared endpoints of edges on the sphere and vice versa.
One can make a topological space out of a graph; then "drawing a graph on a surface without crossings" is the same thing as a topological embedding $G \to \Sigma$, where $\Sigma$ is the surface. Everything else relies on this - and this is the answer to your first two questions - so you should convince yourself of this.
Then if you have an embedding $G \to \Bbb R^2$, you can compose with the embedding $\Bbb R^2 \hookrightarrow S^2$ to get an embedding $G \to S^2$; and if $G \to S^2$ is an embedding, there has to be some point it misses (if your graph is finite, this is because injective continuous maps from compact spaces to Hausdorff spaces are homeomorphisms onto their image; if the image were all of $S^2$, your graph would be homeomorphic to $S^2$, which it's not), so compose with the stereographic projection $S^2 \setminus \{p\} \to \Bbb R^2$ to get an embedding $G \hookrightarrow \Bbb R^2$.
So $G$ embeds into $\Bbb R^2$ if and only if it embeds into $S^2$.
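For the concretely minded, the stereographic projection used above is easy to verify numerically. The formulas below are the standard ones for the unit sphere projected from the north pole (an illustrative sketch):

```python
from math import isclose

def to_plane(x, y, z):
    """Stereographic projection from the north pole (0, 0, 1)."""
    return x / (1 - z), y / (1 - z)

def to_sphere(u, v):
    """Inverse projection: plane point -> unit-sphere point (never the pole)."""
    d = 1 + u*u + v*v
    return 2*u / d, 2*v / d, (d - 2) / d

# Round trip: any planar point maps to the sphere and back unchanged, so
# shared points (and hence crossings) are preserved in both directions.
u, v = 3.0, -1.5
p = to_sphere(u, v)
assert isclose(p[0]**2 + p[1]**2 + p[2]**2, 1.0)   # lands on the unit sphere
assert all(isclose(a, b) for a, b in zip(to_plane(*p), (u, v)))
print("round trip ok")
```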
• What I'm concerned about is the transition from natural description of the problem to topological, formal one. We solve the problem in the domain of topology and assume that the problem stated in topological terms is equivalent to the original problem. In other words, we are assuming that topological formulation of the problem is equivalent to the original problem - this assumption is based on intuition!!!. If not, what axioms or theorems (which is what maths is made of) tell you we can do that - e.g. make a topological space out of a graph, solve the problem in topology domain, and come back? – user216094 Feb 24 '15 at 15:39
• @user216094 The original problem is not a mathematical statement - it is meaningless to talk about drawing a graph on a surface without crossings without defining what that means. So part of the problem becomes to actually appropriately define the problem - the first paragraph says "convince yourself" instead of "prove". If you can successfully define what the problem should mean, you should be able to translate it topologically. (That you can make a topological space out of a graph is a construction, not a theorem.) – user98602 Feb 24 '15 at 15:58
• So we don't know if our problem can be modelled accurately in topology, because it has been stated in informal terms. Then, intuition is telling us what it should mean in mathematical terms. We only assume it's correct. What if I found a counterexample convincing you it's not a correct formalization of the problem? You cannot guarantee I can't do that. – user216094 Feb 24 '15 at 18:06
• I cannot convince myself of this mathematical statement's validity, because I don't know if I should. Suppose I gave you a slightly modified, wrong, model of the problem, but the fact that it's wrong would be hard to discover (you would be 100% sure it's a wrong model once I'd showed you its flaw). But at first, you would be convinced the mathematical model is perfectly correct... – user216094 Feb 24 '15 at 18:48
Place your hand out flat on the table. Now imagine a new 'thing' that is connected to all of your fingertips (including your thumb). When you contract that 'thing' down to a single point, it pulls your fingertips together. Now what effect does this have on the shape of your whole hand? And what would happen to a graph drawn on the back of your hand (including along the back of your fingers) when you did so?
Now imagine you had infinitely many fingertips on fingers with zero length - so your fingertips are simply the points at the edge of your hand. And imagine what would happen to your hand if you tried the same trick as with your real hand.
This is the same thing done with the embeddings. First you define the graph embedding on the plane. Then you take a point at infinity, which is connected to all infinite points, and collapse it to induce curvature in the surface. Then smooth out the curvature and voila, the sphere! To reverse the process, you simply remove the point at infinity (the 'north pole' point you mentioned) and expand all its connected points to give the boundary. Then reduce the curvature until you have the flat surface back, and voila, the plane! | 2019-06-18 07:17:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7531327605247498, "perplexity": 280.94894323108144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998690.87/warc/CC-MAIN-20190618063322-20190618085322-00233.warc.gz"} |
https://forum.rw4all.com/t/javascript-support-in-poster-2-markdown-post/4591 | # Javascript support in Poster 2 markdown post
I want to insert LaTeX maths into blog posts using MathJax.
I can do this at rapidweaver level by inserting the following code in the headers of all pages:
<script
id="MathJax-script"
async
src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"
></script>
On the following blog page
Second Markdown post using LaTeX
the first part is hard coded HTML into the general Poster 2 page. The symbol alpha is well rendered.
Below, the post content comes from a markdown text file which contains similar text. Here, the MathJax code (i.e. \(\alpha\)) is not parsed.
Can this be fixed? Or does the Markdown renderer intercept the \( and \) commands?
TIA,
François
[quote=“snorky22, post:1, topic:4591”]
I re-post the end of my question with correct code
Below, the post content comes from a markdown text file which contains similar text. Here, the MathJax code (i.e. \(\alpha\)) is not parsed.
Can this be fixed? Or does the Markdown renderer intercept the \( and \) commands?
I am using PHP Markdown Extra inside Poster Stack, I don’t know if this library supports your request.
I think in Markdown you would need \\(\alpha\\)
Thanks, I will try doubling the backslash and tell you the result.
Escaping the \ with a double backslash \\ is a brilliant idea; it works!
Thanks a lot for your quick response
http://books.duhnnae.com/material/2017may2/149408812745-A-infty-structures-on-an-elliptic-curve-Alexander-Polishchuk.php | # $A {infty}$-structures on an elliptic curve
The main result of this paper is the proof of the "transversal part" of the homological mirror symmetry conjecture for an elliptic curve, which states an equivalence of two $A_{\infty}$-structures on the category of vector bundles on an elliptic curve. The proof is based on the study of $A_{\infty}$-structures on the category of line bundles over a
Author: Alexander Polishchuk
Source: https://archive.org/
https://chemistry.stackexchange.com/questions/77403/why-is-carbonic-acid-a-weaker-acid-than-acetic-acid | # Why is carbonic acid a weaker acid than acetic acid?
Is it because the $\ce{OH}$ group next to the $\ce{COOH}$ in carbonic acid donates electron density to the $\ce{COO-}$ by resonance, making the conjugate base stronger (compared with the weaker electron-donating inductive effect of $\ce{CH3}$ in acetic acid), and thus the conjugate acid weaker?
What does the resonance contribution look like in this case?
What makes resonance a stronger factor than induction?
Carbonic acid is not weaker than acetic. In fact, it is stronger, just as one might expect by looking at that extra electron-withdrawing substituent. The trouble is, carbonic acid is also pretty unstable. At any given moment, most of it exists as $\ce{CO2}$. This leads to much lower apparent acidity.
Think of it this way: you add some acetic acid to a carbonate salt. An equilibrium sets in, much closer to the starting compounds than to the products. In other words, you'll have just a very tiny amount of $\ce{H2CO3}$. But as soon as you have it, it starts decomposing, thus shifting the equilibrium more and more to the right.
The measured values are 3.6 ($\rm pK_{a1}$ for $\ce{H2CO3}$ itself) and 6.3 ($\rm pK_{a1}$ when dissolved $\ce{CO2(aq)}$ is included)...
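The ordering implied by these numbers is easy to check numerically. A small sketch, using the pKa values quoted in the thread plus the standard literature value for acetic acid (≈ 4.76, an added assumption not stated in the thread):

```python
# Compare acid strengths via Ka = 10**(-pKa).
# pKa 3.6: "true" H2CO3 only; pKa 6.3: apparent value including CO2(aq);
# pKa 4.76: acetic acid (standard literature value, not from the thread).
pka = {"H2CO3 (true)": 3.6, "CO2(aq) apparent": 6.3, "acetic acid": 4.76}
ka = {name: 10 ** (-v) for name, v in pka.items()}

# True carbonic acid is the strongest of the three acids; the apparent
# (CO2-inclusive) value is the weakest, matching the answer above.
assert ka["H2CO3 (true)"] > ka["acetic acid"] > ka["CO2(aq) apparent"]
```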
The conjugate base of carbonic acid is stronger than that of acetic acid because bicarbonate has an $\ce{-OH}$ group, which shows a +R (resonance-donating) effect, while acetate has a $\ce{-CH3}$ group, which shows only the weaker +I (inductive) effect. | 2021-12-03 04:55:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7241404056549072, "perplexity": 1617.6145044594637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362589.37/warc/CC-MAIN-20211203030522-20211203060522-00170.warc.gz"}
http://en.wikipedia.org/wiki/Elliptical_distribution | # Elliptical distribution
In probability and statistics, an elliptical distribution is any member of a broad family of probability distributions that generalize the multivariate normal distribution. Intuitively, in the simplified two and three dimensional case, the joint distribution forms an ellipse and an ellipsoid, respectively, in iso-density plots.
## Definition
Elliptical distributions can be defined using characteristic functions. A multivariate distribution is said to be elliptical if its characteristic function is of the form[1]
$e^{it'\mu} \Psi(t' \Sigma t) \,$
for a specified vector $\mu$, positive-definite matrix $\Sigma$, and characteristic function $\Psi$. The function $\Psi$ is known as the characteristic generator of the elliptical distribution.[2]
Elliptical distributions can also be defined in terms of their density functions. When they exist, the density functions f have the structure:
$f(x)= k \cdot g((x-\mu)'\Sigma^{-1}(x-\mu))$
where $k$ is the scale factor, $x$ is an $n$-dimensional random vector with median vector $\mu$ (which is also the mean vector if the latter exists), $\Sigma$ is a positive definite matrix which is proportional to the covariance matrix if the latter exists, and $g$ is a function mapping from the non-negative reals to the non-negative reals giving a finite area under the curve.[3]
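As a concrete instance of this density structure, the multivariate normal is recovered with $g(z)=e^{-z/2}$ and $k=(2\pi)^{-n/2}|\Sigma|^{-1/2}$. A quick consistency check, assuming NumPy and SciPy are available (not part of the article):

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([1.0, -2.0])
sigma = np.array([[2.0, 0.5], [0.5, 1.0]])  # positive definite

def elliptical_density(x, mu, sigma, g, k):
    # f(x) = k * g((x - mu)' Sigma^{-1} (x - mu))
    d = x - mu
    z = d @ np.linalg.solve(sigma, d)
    return k * g(z)

# Normal case: g(z) = exp(-z/2), k = (2*pi)^{-n/2} * |Sigma|^{-1/2}
n = len(mu)
k = (2 * np.pi) ** (-n / 2) / np.sqrt(np.linalg.det(sigma))
x = np.array([0.3, -1.2])
f = elliptical_density(x, mu, sigma, lambda z: np.exp(-z / 2), k)
assert np.isclose(f, multivariate_normal(mu, sigma).pdf(x))
```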
## Properties
In the 2-dimensional case where the density exists, each iso-density locus (the set of $(x_1, x_2)$ pairs all giving a particular value of $f(x)$) is an ellipse (hence the name elliptical distribution). More generally, for arbitrary $n$ the iso-density loci are ellipsoids.
The multivariate normal distribution is the special case in which $g(z)=e^{-z/2}$ for quadratic form $z$. While the multivariate normal is unbounded (each element of $x$ can take on arbitrarily large positive or negative values with non-zero probability, because $e^{-z/2}>0$ for all non-negative $z$), in general elliptical distributions can be bounded or unbounded—such a distribution is bounded if $g(z)=0$ for all $z$ greater than some value.
Note that there exist elliptical distributions that have infinite mean and variance, such as the multivariate Student's t-distribution or the multivariate Cauchy distribution.[4]
Because the index variable x enters the density function quadratically, all elliptical distributions are symmetric about $\mu.$
## Applications
Elliptical distributions are important in portfolio theory because, if the returns on all assets available for portfolio formation are jointly elliptically distributed, then all portfolios can be characterized completely by their location and scale – that is, any two portfolios with identical location and scale of portfolio return have identical distributions of portfolio return (Chamberlain 1983; Owen and Rabinovitch 1983). For multi-normal distributions, location and scale correspond to mean and standard deviation.
## References
1. ^ Stamatis Cambanis, Steel Huang, and Gordon Simons (1981). "On the Theory of Elliptically Contoured Distributions". Journal of Multivariate Analysis 11: 368–385. doi:10.1016/0047-259x(81)90082-8.
2. ^ Härdle and Simar (2012), p. 178.
3. ^ Frahm, G., Junker, M., & Szimayer, A. (2003). Elliptical copulas: applicability and limitations. Statistics & Probability Letters, 63(3), 275-286.
4. ^ Z. Landsman, E. Valdez, Tail conditional expectations for elliptical distributions, North American Actuarial Journal, 7(4) (2003), pp. 55–71
• Wolfgang Karl Härdle and Léopold Simar (2012). Applied Multivariate Statistical Analysis (3rd ed.). Springer.
• Fang, K., Kotz, S., and Ng, K. (1990). Symmetric Multivariate and Related Distributions. London: Chapman & Hall.
• McNeil, Alexander; Frey, Rüdiger; Embrechts, Paul (2005). Quantitative Risk Management. Princeton University Press. ISBN 0-691-12255-5.
• Chamberlain, G. (1983). "A characterization of the distributions that imply mean-variance utility functions", Journal of Economic Theory 29, 185-201. doi:10.1016/0022-0531(83)90129-1
• Landsman, Zinoviy M.; Valdez, Emiliano A. (2003) Tail Conditional Expectations for Elliptical Distributions (with discussion), The North American Actuarial Journal, 7, 55–123.
• Owen, J., and Rabinovitch, R. (1983). "On the class of elliptical distributions and their applications to the theory of portfolio choice", Journal of Finance 38, 745-752. JSTOR 2328079 | 2014-07-28 09:19:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 21, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8795036673545837, "perplexity": 1237.7894268573118}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510257966.18/warc/CC-MAIN-20140728011737-00400-ip-10-146-231-18.ec2.internal.warc.gz"} |
https://codereview.stackexchange.com/questions/195839/order-loaded-modules-by-amount-of-functions/196292 | # Order loaded modules by amount of functions
I am trying to solve some exercises from the "Programming Erlang" book.
One of them is "Write a function to determine which module exports the most functions"
This is my solution:
-module(module_info).
-export([modules_ordered_by_amount_of_functions/0]).
modules_ordered_by_amount_of_functions() ->
    Modules = loaded_modules_with_functions(),
    SortedModules = lists:sort(fun compare_functions_length/2, Modules),
    lists:map(fun extract_amount_of_functions/1, SortedModules).

%% (Reconstructed from the review below, which describes this function
%% as mapping over code:all_loaded() and then mapping again.)
loaded_modules_with_functions() ->
    ModuleNames = lists:map(fun extract_module_name/1, code:all_loaded()),
    lists:map(fun extract_functions_from_module/1, ModuleNames).
extract_module_name({ModuleName, _}) -> ModuleName.
extract_functions_from_module(ModuleName) ->
[_, {exports, Functions} | _] = ModuleName:module_info(),
{ModuleName, Functions}.
compare_functions_length ({_, FunctionsA}, {_, FunctionsB}) ->
length(FunctionsA) >= length(FunctionsB).
extract_amount_of_functions ({ModuleName, Functions}) ->
{ModuleName, length(Functions)}.
It doesn't really feel elegant to me.
Breaking the problem into tiny, useful functions is a very good way to write Erlang code. So your code is easy to read and gives us a great start in understanding and analyzing it.
The one thing I notice is that, in loaded_modules_with_functions, you map a function over a list and then map a function on the resulting list. This creates an intermediate list which will just be thrown away (GCed.) I'd combine the two operations so there's only one pass through the list. Since the result of this function is the resulting list, I'd use a list comprehension:
loaded_modules_with_functions() ->
[ extract_functions_from_module(ModName) || {ModName, _} <- code:all_loaded() ].
In extract_functions_from_module, you're making an assumption that the export list is always the second element. Maybe this is the case. Or maybe in your OTP release this is always the case. The next release, however, may break your code if module_info adds more information. You can use the proplists module (in the standard library) to generalize the look-up:
extract_functions_from_module(ModuleName) ->
{ModuleName, proplists:get_value(exports, ModuleName:module_info(), [])}.
Finally, I'd put a list comprehension at the end of modules_ordered_by_amount_of_functions and inline extract_amount_of_functions. | 2020-03-30 14:24:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31760159134864807, "perplexity": 3502.0001641916847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370497042.33/warc/CC-MAIN-20200330120036-20200330150036-00315.warc.gz"} |
http://jaac.ijournal.cn/ch/reader/view_abstract.aspx?journal_id=jaac&file_no=201707120000001&flag=3 | # Global regularity for 3D generalized Hall magneto-hydrodynamics equations
Keywords: Hall magneto-hydrodynamics equations, global regularity, hyperdissipation, Littlewood-Paley decomposition.
Abstract: For the 3D incompressible Hall magneto-hydrodynamics equations, global regularity of the weak solutions has not been established so far. The major difficulty is that the dissipation given by the Laplacian operator is insufficient to control the nonlinearity. Wan obtained the global regularity of the 3D generalized Hall-MHD equations with critical and subcritical hyperdissipation regimes $m_{1}(\xi)=|\xi|^{\alpha}$, $m_{2}(\xi)=|\xi|^{\beta}$ for $\alpha\geq\frac{5}{4}$, $\beta\geq\frac{7}{4}$. We improve this slightly by making logarithmic reductions in the dissipation and still obtain global regularity. More precisely, the hyperdissipation regimes in our system are $m_{1}(\xi)\geq\frac{|\xi|^{\alpha}}{g_{1}(\xi)}$ and $m_{2}(\xi)\geq\frac{|\xi|^{\beta}}{g_{2}(\xi)}$ for some non-decreasing functions $g_{1}, g_{2}: \mathbb{R}^{+}\rightarrow\mathbb{R}^{+}$ such that $\int_{1}^{\infty}\frac{1}{s\left(g_{1}^{2}(s)+g_{2}^{2}(s)\right)^{2}}\,\mathrm{d}s=+\infty.$ | 2018-03-22 10:14:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7187260985374451, "perplexity": 819.9523677375873}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647838.64/warc/CC-MAIN-20180322092712-20180322112712-00632.warc.gz"}
http://advancedintegrals.com/tag/hypergeoemtric/ | # Tag Archives: hypergeometric
## Euler Hypergeometric transformation proof
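Before working through the derivation below, the Euler transformation can be sanity-checked numerically. A quick sketch using `scipy.special.hyp2f1` (an assumed dependency, not part of the original post):

```python
from scipy.special import hyp2f1

# Euler transformation: 2F1(a,b;c;z) = (1-z)^(c-a-b) * 2F1(c-a,c-b;c;z)
a, b, c, z = 0.3, 0.7, 1.5, 0.4
lhs = hyp2f1(a, b, c, z)
rhs = (1 - z) ** (c - a - b) * hyp2f1(c - a, c - b, c, z)
assert abs(lhs - rhs) < 1e-10
```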
$${}_2F_1 \left(a,b;c;z\right)=(1-z)^{c-a-b}\, {}_2F_1 \left(c-a,c-b;c;z\right)$$

$\textit{Proof.}$ In the Pfaff transformations (proved here), let $z \to \frac{z}{z-1}$, which gives

$${}_2F_1 \left(a,b;c;\frac{z}{z-1}\right)=(1-z)^{a}\, {}_2F_1 \left(a,c-b;c;z\right)$$

and

$${}_2F_1 \left(a,b;c;\frac{z}{z-1}\right)=(1-z)^{b}\, {}_2F_1 \left(c-a,b;c;z\right)$$

(Note the exponents are $+a$ and $+b$, since $1-\frac{z}{z-1}=\frac{1}{1-z}$.) Equating the two transformations,

$$(1-z)^{a}\, {}_2F_1 \left(a,c-b;c;z\right)=(1-z)^{b}\, {}_2F_1 \left(c-a,b;c;z\right)$$

Now use the transformation … Continue reading | 2018-11-17 10:38:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9405049085617065, "perplexity": 3293.3307216189874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743353.54/warc/CC-MAIN-20181117102757-20181117124757-00381.warc.gz"}
https://study.com/academy/answer/a-a-body-cools-down-from-50-deg-c-to-45-deg-c-in-5-minutes-and-to-40-deg-c-in-another-8-minutes-find-the-temperature-of-the-surrounding-b-a-calorimeter-contains-50-g-of-water-at-50-deg-c-the-tem.html | # a) A body cools down from 50 deg C to 45 deg C in 5 minutes and to 40 deg C in another 8 minutes....
## Question:
a) A body cools down from 50°C to 45°C in 5 minutes and to 40°C in another 8 minutes. Find the temperature of the surrounding.
b) A calorimeter contains 50 g of water at 50°C. The temperature falls to 45°C in 10 minutes. When the calorimeter contains 100 g of water at 50°C, it takes 18 minutes for the temperature to become 45°C. Find the water equivalent of the calorimeter.
## Newton's law of cooling:
Newton's law of cooling states that the rate of heat loss of a body is proportional to the temperature difference between the body and its surroundings: as that difference increases, so does the rate of heat transfer. The law remains a good approximation, even when some heat is transferred by radiation, provided the temperature difference is small.
(a)
Given data
• The initial temperature of body is {eq}{T_1} = {50^{\rm{o}}}{\rm{C}} {/eq}.
• The temperature of body is {eq}{T_2} = {45^{\rm{o}}}{\rm{C}} {/eq}.
• The time taken to fall in temperature is {eq}{t_1} = 5\;\min {/eq}.
• The final temperature of body is {eq}{T_3} = {40^{\rm{o}}}{\rm{C}} {/eq}.
• The time taken to fall in temperature is {eq}{t_2} = 8\;\min {/eq}.
The expression for Newton's law of cooling is given as,
{eq}\dfrac{{dT}}{{dt}} = - K\left( {T - {T_o}} \right)......\left( 1 \right) {/eq}
where {eq}T {/eq} is the (average) temperature of the body and {eq}{T_o} {/eq} is the temperature of the surroundings.
In the second case (cooling from 45°C to 40°C),
The average temperature is,
{eq}\begin{align*} {T_{av,2}} &= \dfrac{{45 + 40}}{2}\\ {T_{av,2}} &= {42.5^{\rm{o}}}{\rm{C}} \end{align*} {/eq}
The difference in the temperature of body and surrounding is,
{eq}\Delta T = {\left( {42.5 - {T_o}} \right)^{\rm{o}}}{\rm{C}} {/eq}
The rate of fall of temperature is,
{eq}\begin{align*} \dfrac{{dT}}{{dt}} &= \dfrac{{\Delta T}}{t}\\ \dfrac{{dT}}{{dt}} &= \dfrac{{45 - 40}}{8}\\ \dfrac{{dT}}{{dt}} &= {\dfrac{5}{8}^{\rm{o}}}{\rm{C/min}} \end{align*} {/eq}
Substituting the values in equation 1,
{eq}\dfrac{{dT}}{{dt}} = - K\left( {42.5 - {T_o}} \right)......\left( 2 \right) {/eq}
In the first case (cooling from 50°C to 45°C),
The average temperature is,
{eq}\begin{align*} {T_{av}} &= \dfrac{{50 + 45}}{2}\\ {T_{av}} &= {47.5^{\rm{o}}}{\rm{C}} \end{align*} {/eq}
The difference in the temperature of body and surrounding is,
{eq}\Delta T = {\left( {47.5 - {T_o}} \right)^{\rm{o}}}{\rm{C}} {/eq}
The rate of fall of temperature is,
{eq}\begin{align*} \dfrac{{dT}}{{dt}} &= \dfrac{{\Delta T}}{t}\\ \dfrac{{dT}}{{dt}} &= \dfrac{{50 - 45}}{5}\\ \dfrac{{dT}}{{dt}} &= {1^{\rm{o}}}{\rm{C/min}} \end{align*} {/eq}
Substituting the values in equation 1,
{eq}\dfrac{{dT}}{{dt}} = - K\left( {47.5 - {T_o}} \right)......\left( 3 \right) {/eq}
Dividing equation 3 by equation 2,
{eq}\begin{align*} \dfrac{1}{{\dfrac{5}{8}}} &= \dfrac{{47.5 - {T_o}}}{{42.5 - {T_o}}}\\ 0.625\left( {47.5 - {T_o}} \right) &= 42.5 - {T_o}\\ {T_o} &= {34.1^{\rm{o}}}{\rm{C}} \end{align*} {/eq}
Thus, the temperature of the surrounding is {eq}{34.1^{\rm{o}}}{\rm{C}} {/eq}.
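The linear equation in the last step can be verified with a short script (values taken from the solution above):

```python
# Ratio of cooling rates equals ratio of (average temperature - surroundings):
#   (5/8) / 1 = (42.5 - To) / (47.5 - To)
# Rearranged: 0.625 * (47.5 - To) = 42.5 - To, a linear equation in To.
rate_ratio = (5 / 8) / 1            # second interval rate over first interval rate
To = (42.5 - rate_ratio * 47.5) / (1 - rate_ratio)
assert abs(To - 34.17) < 0.01       # surroundings at about 34.1-34.2 deg C
```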
(b)
Given data
• The mass of water in calorimeter is {eq}{m_1} = 50\;{\rm{g}} {/eq}.
• The temperature of the water is {eq}{t_{w,i}} = {50^{\rm{o}}}{\rm{C}} {/eq}.
• The temperature of water decreases to {eq}{t_{w,f}} = {45^{\rm{o}}}{\rm{C}} {/eq}.
• The time taken for the change in temperature is {eq}{t_1} = 10\;{\rm{min}} {/eq}.
• The mass of water in calorimeter is {eq}{m_2} = 100\;{\rm{g}} {/eq}.
• The time taken for the change in temperature is {eq}{t_2} = 18\;{\rm{min}} {/eq}.
The expression for the rate of heat flow is given as,
{eq}q = \dfrac{{ms\Delta t}}{t}......\left( 1 \right) {/eq}
Substituting the values in equation 1, for rate of heat flow,
{eq}{q_1} = \dfrac{{\left( {w + 50 \times {{10}^{ - 3}}} \right) \times 4200 \times 5}}{{10}}......\left( 2 \right) {/eq}
Substituting the values in equation 1, for rate of heat flow,
{eq}{q_2} = \dfrac{{\left( {w + 100 \times {{10}^{ - 3}}} \right) \times 4200 \times 5}}{{18}}......\left( 3 \right) {/eq}
Equating equation 2 and 3,
{eq}\begin{align*} \dfrac{{\left( {w + 50 \times {{10}^{ - 3}}} \right) \times 4200 \times 5}}{{10}} &= \dfrac{{\left( {w + 100 \times {{10}^{ - 3}}} \right) \times 4200 \times 5}}{{18}}\\ 18\left( {w + 50 \times {{10}^{ - 3}}} \right) &= 10\left( {w + 100 \times {{10}^{ - 3}}} \right)\\ 18w - 10w &= 1 - 0.9\\ w &= 12.5 \times {10^{ - 3}}\;{\rm{kg}} \end{align*} {/eq}
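The linear equation above can be solved and checked in a couple of lines (masses in kg, as in the solution):

```python
# 18 * (w + 0.050) = 10 * (w + 0.100)  =>  8w = 1.0 - 0.9  =>  w = 0.0125 kg
w = (10 * 0.100 - 18 * 0.050) / (18 - 10)
assert abs(w - 0.0125) < 1e-9       # water equivalent = 12.5 g
```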
Thus, the water equivalent of the calorimeter is {eq}12.5\;{\rm{g}} {/eq}. | 2020-03-31 08:33:35 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000098943710327, "perplexity": 6086.025572606167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370500331.13/warc/CC-MAIN-20200331053639-20200331083639-00499.warc.gz"} |
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=15&t=33973 | Calculating Frequency of Light
$E=h\nu$
Clarissa Cabil 1I
Calculating Frequency of Light
For a worked example during his atomic spectrum lecture and module, the question asked us to calculate frequency of light emitted by a hydrogen atom when an electron makes a transition from the 4th to 2nd principal quantum level. I understand that we first need to calculate the energies of both the 4th and 2nd levels in order to find the difference in energy (which is negative). But next, when using the frequency = energy/Planck's constant equation, why do we make the energy positive? Is frequency always supposed to be a positive value?
yea-lyn pak_1G
Re: Calculating Frequency of Light
So we make the energy positive because the change in energy of the electron is equal to the energy of the light emitted. When an electron is dropped from a higher energy level to a lower energy level, the electron loses some energy (this is delta E, or change in E). This energy is lost because it's emitted as light. So the change in E is equal to the E of the light emitted. So when plugging in E into the E=hv equation (in Professor Lavelle's example), E would be positive because this is simply the energy of the light emitted.
Hope this helps!
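The n = 4 to n = 2 transition from the lecture example can be computed directly. A sketch using the Bohr-model hydrogen energy levels (the 13.6 eV figure and physical constants are standard values, not from this thread):

```python
# E_n = -13.6 eV / n^2 for hydrogen; the emitted photon carries |E_final - E_initial|.
h = 6.626e-34        # Planck's constant, J*s
eV = 1.602e-19       # joules per electron-volt

E4 = -13.6 / 4**2    # energy of n=4 level, eV
E2 = -13.6 / 2**2    # energy of n=2 level, eV
delta_E = abs(E2 - E4) * eV   # taking the magnitude makes the photon energy positive (J)
nu = delta_E / h              # frequency of emitted light, Hz
assert 6.1e14 < nu < 6.2e14   # about 6.17e14 Hz
```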
daisyjimenezt
Re: Calculating Frequency of Light
I think it's because the energy that is lost is released as electromagnetic radiation, which should be positive.
daniella_knight1I
Re: Calculating Frequency of Light
The values are going to be positive because the light emitted is equal to the energy "lost" by the electron when it dropped levels. Although that may be seen as negative and confusing, it balances out. | 2020-11-25 04:01:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6568500399589539, "perplexity": 618.4780716484747}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141180636.17/warc/CC-MAIN-20201125012933-20201125042933-00170.warc.gz"} |
https://homework.cpm.org/category/CON_FOUND/textbook/caac/chapter/12/lesson/12.4.3/problem/12-90 | ### CAAC Chapter 12, Lesson 12.4.3, Problem 12-90
12-90.
The graphs of several relations are shown below. Decide if each is a function. If the relation is not a function, explain why not.
1. A relation is only a function if there is only one $y$-value for each $x$-value. Are there any $x$-values on this graph that have multiple $y$-values?
This relation is a function.
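For point sets read off a graph, the "one y-value for each x-value" rule can be expressed directly. A hypothetical helper (not part of the textbook) implementing the check:

```python
def is_function(points):
    """Return True if no x-value maps to more than one y-value."""
    seen = {}
    for x, y in points:
        if x in seen and seen[x] != y:
            return False  # same x, different y: fails the vertical line test
        seen[x] = y
    return True

assert is_function([(0, 1), (1, 2), (2, 2)])       # one y per x: a function
assert not is_function([(1, 2), (1, 3), (4, 5)])   # x = 1 has two y-values
```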
1. See the help for part (a).
This relation is not a function. Be sure to explain why.
1. See the help for part (a). | 2020-07-12 04:17:16 | {"extraction_info": {"found_math": true, "script_math_tex": 4, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6207322478294373, "perplexity": 839.7041223074185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657129517.82/warc/CC-MAIN-20200712015556-20200712045556-00557.warc.gz"} |
http://cnx.org/content/m11765/latest/?collection=col10223/latest | # Connexions
You are here: Home » Content » ECE 301 Projects Fall 2003 » Feature Detection Test for Fish Classification
# Feature Detection Test for Fish Classification
Module by: Kyle Clarkson, Jason Sedano, Ian Clark.
Summary: This test determines whether a fish is a salmon or a trout based on different color features that are detected using the 2-D DWT
One of the most important features of different fish is the color of their heads, tails, and bodies. This test breaks the different color matrices into blocks of similar colors and uses them to detect what color each part of the fish's body is.
The first part of the process is to run the 2-D DWT on each of the color matrices. It is run three times, so that the resulting picture is 1/8 of the resolution of the original matrices and has high values only where the color is relatively constant over a large area. This essentially provides a method for low-pass filtering the picture and finding only large blocks of color.
Next, the picture is filtered by dropping any values that are lower than a threshold and setting any values over that threshold to 1. This drops all areas of the picture that are not very intense, or where the values are not constant over a large area. Now the picture contains ones wherever the large blocks of color are.
The next step is to count all the different blocks of ones, which is done using the MATLAB command bwlabel. Each block is then examined one by one to see what size it is and where in the picture it is located. From this, it can be determined what color the body, head, and tail of the fish are. If they match the pattern for either type of fish, then the test classifies it as that type. Because this is the hardest test to satisfy, it is also the most heavily weighted test in the entire process.
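The threshold-then-count step described above has a direct analogue outside MATLAB. A rough Python sketch (not part of the original module), with `scipy.ndimage.label` playing the role of `bwlabel`:

```python
import numpy as np
from scipy import ndimage

# Toy "low-passed" color channel: two well-separated bright blocks.
img = np.zeros((8, 8))
img[1:3, 1:4] = 9.0   # block 1
img[5:7, 5:8] = 7.0   # block 2

mask = img > 5                      # threshold -> binary image (cf. dwtr3 > 5)
labels, num = ndimage.label(mask)   # number each connected block (cf. bwlabel)
assert num == 2

# Bounding box of each block (cf. the left/right/top/bottom loops below)
for rows, cols in ndimage.find_objects(labels):
    print(rows.start, rows.stop, cols.start, cols.stop)
```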
## Example 1: Code for Feature Detection
% This function takes in a 3D image matrix of three colors, red, green, and blue and uses the
% 2-D DWT to low poss filter them and decrease their resolution. It then looks for blocks of
% color and outputs a matrix with the size, and location of each of the different features of
% each color. These can then be analyzed to see if they show evidence of a specific fish type.
function [rfeats,gfeats,bfeats] = featuredet(image)
redimage = image(:,:,1);
greenimage = image(:,:,2);
blueimage = image(:,:,3);
rfeats = [0 0 0 0 0 0 0 0];
gfeats = [0 0 0 0 0 0 0 0];
bfeats = [0 0 0 0 0 0 0 0];
% Run the 2-D DWT on the different colors to reduce the resolution of the picture and
% effectively low-pass filter the image.
dwtr = dwt2(redimage, 'haar');
dwtr2 = dwt2(dwtr, 'haar');
dwtr3 = dwt2(dwtr2, 'haar');
%dwtr3feat = (dwtr3 > 6)
dwtr4 = dwt2(dwtr3, 'haar');
%dwtr4feat = (dwtr4 > 13);
dwtg = dwt2(greenimage, 'haar');
dwtg2 = dwt2(dwtg, 'haar');
dwtg3 = dwt2(dwtg2, 'haar');
dwtg4 = dwt2(dwtg3, 'haar');
dwtb = dwt2(blueimage, 'haar');
dwtb2 = dwt2(dwtb, 'haar');
dwtb3 = dwt2(dwtb2, 'haar');
dwtb4 = dwt2(dwtb3, 'haar');
% Set everything below a threshold to 0 and everything above to 1 and then
% number every group of ones in the binary image
[redfeatures, numred] = bwlabel(dwtr3>5);
[greenfeatures, numgreen] = bwlabel(dwtg3>5);
[bluefeatures, numblue] = bwlabel(dwtb3>5);
% Cycle through each different feature and find its location and size
for a = 1:numred
rowval = sum(redfeatures==a);
colval = sum((redfeatures==a)')';
sizeval = size(redfeatures);
j = 1;
left = 0;
while rowval(j)<1
left = j;
j = j+1;
end
j = 1;
right = sizeval(2);
while rowval(sizeval(2)-j+1)<1
right = sizeval(2)-j+1;
j = j+1;
end
j = 1;
top = 0;
while colval(j)<1
top = j;
j = j+1;
end
j = 1;
bottom = sizeval(1);
while colval(sizeval(1)-j+1)<1
bottom = sizeval(1)-j+1;
j = j+1;
end
sumval = sum(rowval);
rfeats(a,:) = [top bottom bottom-top left right right-left (right-left)./(bottom-top) sumval];
end
for b = 1:numgreen
rowval = sum(greenfeatures==b);
colval = sum((greenfeatures==b)')';
sizeval = size(greenfeatures);
% Initialize boundary defaults (as in the red loop above) so features that
% touch the image border do not leave these variables unset.
j = 1;
left = 0;
while rowval(j)<1
left = j;
j = j+1;
end
j = 1;
right = sizeval(2);
while rowval(sizeval(2)-j+1)<1
right = sizeval(2)-j+1;
j = j+1;
end
j = 1;
top = 0;
while colval(j)<1
top = j;
j = j+1;
end
j = 1;
bottom = sizeval(1);
while colval(sizeval(1)-j+1)<1
bottom = sizeval(1)-j+1;
j = j+1;
end
sumval = sum(rowval);
gfeats(b,:) = [top bottom bottom-top left right right-left (right-left)./(bottom-top) sumval];
end
for c = 1:numblue
rowval = sum(bluefeatures==c);
colval = sum((bluefeatures==c)')';
sizeval = size(bluefeatures);
j = 1;
while rowval(j)<1
left = j;
j = j+1;
end
j = 1;
while rowval(sizeval(2)-j+1)<1
right = sizeval(2)-j+1;
j = j+1;
end
j = 1;
while colval(j)<1
top = j;
j = j+1;
end
j = 1;
while colval(sizeval(1)-j+1)<1
bottom = sizeval(1)-j+1;
j = j+1;
end
sumval = sum(rowval);
bfeats(c,:) = [top bottom bottom-top left right right-left (right-left)./(bottom-top) sumval];
end
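The four while-loop scans per feature above simply locate the first and last nonzero row and column of the binary mask for one labelled blob. For readers outside MATLAB, here is a minimal Python sketch of that bounding-box step (the helper name `bbox` is mine; it assumes a list-of-lists 0/1 mask and returns 1-based indices, MATLAB-style):

```python
def bbox(mask):
    """Return (top, bottom, left, right) of the nonzero region, 1-based.

    mask is a list of lists of 0/1 values, e.g. the result of
    (labelmatrix == a) for one connected component.
    """
    # rows/cols that contain at least one nonzero entry
    rows = [i for i, row in enumerate(mask) if any(row)]
    cols = [j for j in range(len(mask[0])) if any(row[j] for row in mask)]
    return rows[0] + 1, rows[-1] + 1, cols[0] + 1, cols[-1] + 1

# One labelled blob occupying rows 2-3 and columns 2-3:
print(bbox([[0, 0, 0],
            [0, 1, 1],
            [0, 0, 1]]))  # -> (2, 3, 2, 3)
```

The width, height, aspect ratio, and pixel count stored in each `feats` row follow directly from these four numbers.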
## Example 2: Code for Feature Analysis
% This function takes a 3D image and runs the feature detector on it, which gives matrices
% containing the sizes and shapes of the different features. It then decides what color the
% fish's body, head, and tail are, or whether they can't be determined from the features.
[rfeats,gfeats,bfeats] = featuredet(fishimage);
impfeats = [0 0 0; 0 0 0; 0 0 0];
% This section takes each feature located by the feature detector and decides whether it
% is evidence of a body, head, or tail of the fish being that color.
for a = 1:size(rfeats,1)
% If the feature is extremely long, it is a body
if and(rfeats(a,6)>30, rfeats(a,8)>30)
impfeats(1,1) = 1;
end
% If the feature is far to the right, it is a head
if and(rfeats(a,4)>38, rfeats(a,8)>20)
impfeats(1,2) = 1;
end
% If the feature is far to the left, it is a tail
if and(rfeats(a,5)<25, rfeats(a,8)>6)
impfeats(1,3) = 1;
end
end
for b = 1:size(gfeats,1)
if and(gfeats(b,6)>30, gfeats(b,8)>30)
impfeats(2,1) = 1;
end
if and(gfeats(b,4)>38, gfeats(b,8)>10)
impfeats(2,2) = 1;
end
if and(gfeats(b,5)<25, gfeats(b,8)>6)
impfeats(2,3) = 1;
end
end
for c = 1:size(bfeats,1)
if and(bfeats(c,6)>30, bfeats(c,8)>30)
impfeats(3,1) = 1;
end
if and(bfeats(c,4)>38, bfeats(c,8)>10)
impfeats(3,2) = 1;
end
if and(bfeats(c,5)<25, bfeats(c,8)>6)
impfeats(3,3) = 1;
end
end
% This section looks at each of the columns of the feature matrix and then
% outputs which color pattern they are.
if and(impfeats(1,1) == 1, and(impfeats (2,1) == 1, impfeats (3,1) == 1))
body = 'rgb';
end
if and(impfeats(1,1) == 1, and(impfeats (2,1) == 1, impfeats (3,1) == 0))
body = 'rg ';
end
if and(impfeats(1,1) == 1, and(impfeats (2,1) == 0, impfeats (3,1) == 1))
body = 'rb ';
end
if and(impfeats(1,1) == 1, and(impfeats (2,1) == 0, impfeats (3,1) == 0))
body = 'r ';
end
if and(impfeats(1,1) == 0, and(impfeats (2,1) == 1, impfeats (3,1) == 1))
body = 'gb ';
end
if and(impfeats(1,1) == 0, and(impfeats (2,1) == 1, impfeats (3,1) == 0))
body = 'g ';
end
if and(impfeats(1,1) == 0, and(impfeats (2,1) == 0, impfeats (3,1) == 1))
body = 'b ';
end
if and(impfeats(1,1) == 0, and(impfeats (2,1) == 0, impfeats (3,1) == 0))
body = 'cbd';
end
% Determine the head color pattern (these assignments were missing; they are
% reconstructed by analogy with the body and tail sections)
if and(impfeats(1,2) == 1, and(impfeats(2,2) == 1, impfeats(3,2) == 1))
head = 'rgb';
end
if and(impfeats(1,2) == 1, and(impfeats(2,2) == 1, impfeats(3,2) == 0))
head = 'rg ';
end
if and(impfeats(1,2) == 1, and(impfeats(2,2) == 0, impfeats(3,2) == 1))
head = 'rb ';
end
if and(impfeats(1,2) == 1, and(impfeats(2,2) == 0, impfeats(3,2) == 0))
head = 'r ';
end
if and(impfeats(1,2) == 0, and(impfeats(2,2) == 1, impfeats(3,2) == 1))
head = 'gb ';
end
if and(impfeats(1,2) == 0, and(impfeats(2,2) == 1, impfeats(3,2) == 0))
head = 'g ';
end
if and(impfeats(1,2) == 0, and(impfeats(2,2) == 0, impfeats(3,2) == 1))
head = 'b ';
end
if and(impfeats(1,2) == 0, and(impfeats(2,2) == 0, impfeats(3,2) == 0))
head = 'cbd';
end
if and(impfeats(1,3) == 1, and(impfeats (2,3) == 1, impfeats (3,3) == 1))
tail = 'rgb';
end
if and(impfeats(1,3) == 1, and(impfeats (2,3) == 1, impfeats (3,3) == 0))
tail = 'rg ';
end
if and(impfeats(1,3) == 1, and(impfeats (2,3) == 0, impfeats (3,3) == 1))
tail = 'rb ';
end
if and(impfeats(1,3) == 1, and(impfeats (2,3) == 0, impfeats (3,3) == 0))
tail = 'r ';
end
if and(impfeats(1,3) == 0, and(impfeats (2,3) == 1, impfeats (3,3) == 1))
tail = 'gb ';
end
if and(impfeats(1,3) == 0, and(impfeats (2,3) == 1, impfeats (3,3) == 0))
tail = 'g ';
end
if and(impfeats(1,3) == 0, and(impfeats (2,3) == 0, impfeats (3,3) == 1))
tail = 'b ';
end
if and(impfeats(1,3) == 0, and(impfeats (2,3) == 0, impfeats (3,3) == 0))
tail = 'cbd';
end
| External bookmarks | 2013-05-26 01:14:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20534810423851013, "perplexity": 7730.981524888635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706477730/warc/CC-MAIN-20130516121437-00017-ip-10-60-113-184.ec2.internal.warc.gz"} |
http://in-theory.blogspot.com/2008/01/finally.html | ## Thursday, January 24, 2008
### Finally!
After a hiatus of almost four years, the graduate computational complexity course returns to Berkeley.
To get started, I proved Cook's non-deterministic hierarchy theorem, a 1970s result with a beautifully clever proof (and one that is not very well known), which I first learned from Sanjeev Arora.
Though the full result is more general, say we want to prove that there is a language in NP that cannot be solved by non-deterministic Turing machines in time $o(n^3)$.
(If one does not want to talk about non-deterministic Turing machines, the same proof will apply to other quantitative restrictions on NP, such as bounding the length of the witness and the running time of the verification.)
In the deterministic case, where we want to find a language in P not solvable in time $o(n^3)$, it's very simple. We define the language $L$ that contains all pairs $(\langle T\rangle,x)$ where: (i) $T$ is a Turing machine, (ii) $x$ is a binary string, (iii) $T$ rejects the input $(\langle T\rangle,x)$ within $|(\langle T\rangle,x)|^3$ steps, where $|z|$ denotes the length of a string $z$.
It's easy to see that $L$ is in P, and it is also easy to see that if a machine $M$ could decide this problem in time $\leq n^3$ on all sufficiently large inputs, then the behavior of $M$ on input $\langle M\rangle,x$, for every $x$ long enough, leads to a contradiction.
We could try the same with NP, and define $L$ to contain pairs $(\langle T\rangle,x)$ such that $T$ is a non-deterministic Turing machine that has no accepting path of length $\leq |\langle T\rangle,x|^3$ on input $(\langle T\rangle,x)$. It would be easy to see that $L$ cannot be solved non-deterministically in time $o(n^3)$, but it's hopeless to prove that $L$ is in NP, because in order to solve $L$ we need to decide whether a given non-deterministic Turing machine rejects, which is, in general, a coNP-complete problem.
Here is Cook's argument. Define the function $f(k)$ as follows: $f(1):=2$, $f(k):= 2^{(1+f(k-1))^3}$. Hence, $f(k)$ is a tower of exponentials of height $k$. Now define the language $L$ as follows.
$L$ contains all pairs $\langle T\rangle,0^t$ where $\langle T\rangle$ is a non-deterministic Turing machine and $0^t$ is a sequence of $t$ zeroes such that one of the following conditions is satisfied:
1. There is a $k$ such that $f(k)=t$, and $T$ has no accepting computation on input $\langle T\rangle,0^{1+f(k-1)}$ of running time $\leq (1+f(k-1))^3$;
2. $t$ is not of the form $f(k)$ for any $k$, and $T$ has an accepting computation on input $\langle T\rangle,0^{1+t}$ of running time $\leq (t+1)^3$.
Now let's see that $L$ is in NP. When we are given an input $\langle T\rangle,0^t$ we can first check if there is a $k$ such that $f(k)=t$.
1. If there is, we can compute $t':=f(k-1)$ and deterministically simulate all computations of $T$ on inputs $\langle T\rangle,0^{t'}$ up to running time $t'^3$. This takes time $2^{O(t'^3)}$ which is polynomial in $t$.
2. Otherwise, we non-deterministically simulate $T$ on input $\langle T\rangle,0^{t+1}$ for up to $(t+1)^3$ steps. (And reject after time-out.)
In either case, we are correctly deciding the language.
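The padding schedule and the check "is there a $k$ with $f(k)=t$?" can be sketched in Python (helper names are mine; only tiny arguments are feasible, since $f$ is a tower of exponentials):

```python
def f(k):
    """f(1) = 2, f(k) = 2 ** ((1 + f(k-1)) ** 3): a tower of exponentials.

    Only k <= 2 can actually be materialized; f(3) already has about 2**81 bits.
    """
    v = 2
    for _ in range(k - 1):
        v = 2 ** ((1 + v) ** 3)
    return v

def padding_level(t):
    """Return k with f(k) == t, or None if t is not of the form f(k).

    The next candidate 2**e is generated only when it could still fit
    inside t, so the loop never builds numbers much larger than t itself.
    """
    k, v = 1, 2
    while True:
        if v == t:
            return k
        e = (1 + v) ** 3                # the next value of f is 2 ** e
        if e > t.bit_length():          # then 2 ** e > t: no match possible
            return None
        k, v = k + 1, 2 ** e

print(f(2), padding_level(f(2)), padding_level(f(2) + 1))  # 134217728 2 None
```

This mirrors the first step of the NP algorithm: deciding which of the two cases applies takes time polynomial in $t$.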
Finally, suppose that $L$ could be decided by a non-deterministic Turing machine $M$ running in time $o(n^3)$. In particular, for all sufficiently large $t$, the machine runs in time $\leq t^3$ on input $\langle M\rangle,0^t$.
Choose $k$ to be sufficiently large so that for every $t$ in the interval $1+f(k-1),...,f(k)$ the above property is true.
Now we can see that $M$ accepts $(\langle M\rangle,0^{f(k-1)+1})$ if and only if $M$ accepts $(\langle M\rangle,0^{f(k-1)+2})$ if and only if ... if and only if $M$ accepts $(\langle M\rangle,0^{f(k)})$ if and only if $M$ rejects $(\langle M\rangle,0^{f(k-1)+1})$, and we have our contradiction. | 2014-07-22 23:36:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8897285461425781, "perplexity": 114.36441305654425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997869778.45/warc/CC-MAIN-20140722025749-00085-ip-10-33-131-23.ec2.internal.warc.gz"} |
http://www.fightfinance.com/?q=115,157,233,365,370,405,614,707,714,848 | # Fight Finance
A firm has a debt-to-assets ratio of 50%. The firm then issues a large amount of debt to raise money for new projects of similar market risk to the company's existing projects. Assume a classical tax system. Which statement is correct?
A 90-day Bank Accepted Bill has a face value of $1,000,000. The interest rate is 6% pa and there are 365 days in the year. What is its price?

A four year bond has a face value of $100, a yield of 9% and a fixed coupon rate of 6%, paid semi-annually. What is its price?
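Bank-bill pricing like this uses simple interest over a 365-day year. A hedged Python sketch of the discounting (not the site's official solution key):

```python
face_value = 1_000_000
rate, days = 0.06, 90                       # 6% pa, 90-day bill, 365-day year
# discount the face value back over the bill's term at simple interest
price = face_value / (1 + rate * days / 365)
print(round(price, 2))                      # roughly $985,421
```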
Stocks in the United States usually pay quarterly dividends. For example, the software giant Microsoft paid a $0.23 dividend every quarter over the 2013 financial year and plans to pay a$0.28 dividend every quarter over the 2014 financial year.
Using the dividend discount model and net present value techniques, calculate the stock price of Microsoft assuming that:
• The time now is the beginning of July 2014. The next dividend of $0.28 will be received in 3 months (end of September 2014), with another 3 quarterly payments of$0.28 after this (end of December 2014, March 2015 and June 2015).
• The quarterly dividend will increase by 2.5% every year, but each quarterly dividend over the year will be equal. So each quarterly dividend paid in the financial year beginning in September 2015 will be $0.287 $(=0.28×(1+0.025)^1)$, with the last at the end of June 2016. In the next financial year beginning in September 2016 each quarterly dividend will be $0.294175 $(=0.28×(1+0.025)^2)$, with the last at the end of June 2017, and so on forever.
• The total required return on equity is 6% pa.
• The required return and growth rate are given as effective annual rates.
• Dividend payment dates and ex-dividend dates are at the same time.
• Remember that there are 4 quarters in a year and 3 months in a quarter.
What is the current stock price?
Project Data

• Project life: 2 yrs
• Initial investment in equipment: $600k
• Depreciation of equipment per year: $250k
• Expected sale price of equipment at end of project: $200k
• Revenue per job: $12k
• Variable cost per job: $4k
• Quantity of jobs per year: 120
• Fixed costs per year, paid at the end of each year: $100k
• Interest expense in first year (at t=1): $16.091k
• Interest expense in second year (at t=2): $9.711k
• Tax rate: 30%
• Government treasury bond yield: 5%
• Bank loan debt yield: 6%
• Levered cost of equity: 12.5%
• Market portfolio return: 10%
• Beta of assets: 1.24
• Beta of levered equity: 1.5
• Firm's and project's debt-to-equity ratio: 25%
Notes
1. The project will require an immediate purchase of $50k of inventory, which will all be sold at cost when the project ends. Current liabilities are negligible so they can be ignored.
Assumptions
• The debt-to-equity ratio will be kept constant throughout the life of the project. The amount of interest expense at the end of each period has been correctly calculated to maintain this constant debt-to-equity ratio. Note that interest expense is different in each year.
• Thousands are represented by 'k' (kilo).
• All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year.
• All rates and cash flows are nominal. The inflation rate is 2% pa.
• All rates are given as effective annual rates.
• The 50% capital gains tax discount is not available since the project is undertaken by a firm, not an individual.
What is the net present value (NPV) of the project?
The perpetuity with growth formula is:
$$P_0= \dfrac{C_1}{r-g}$$
Which of the following is NOT equal to the total required return (r)?
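The perpetuity-with-growth formula above can be checked with a quick Python sketch (illustrative values, function name is mine). Rearranging also gives the decomposition $r = C_1/P_0 + g$, i.e. total return = income return + capital return:

```python
def perpetuity_price(c1, r, g):
    """Perpetuity with growth: P0 = C1 / (r - g); requires r > g."""
    assert r > g, "formula only valid when r > g"
    return c1 / (r - g)

p0 = perpetuity_price(100, 0.10, 0.03)
print(round(p0, 2))          # 1428.57
# the total required return decomposes as dividend yield plus growth
r = 100 / p0 + 0.03
```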
You buy a house funded using a home loan. Have you or debt?
Convert a 10% effective annual rate $(r_\text{eff annual})$ into a continuously compounded annual rate $(r_\text{cc annual})$. The equivalent continuously compounded annual rate is:
Which of the following quantities is commonly assumed to be normally distributed?
Which of the following is NOT the Australian central bank’s responsibility? | 2020-08-03 11:34:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24506144225597382, "perplexity": 2757.434330678247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735810.18/warc/CC-MAIN-20200803111838-20200803141838-00369.warc.gz"} |
https://www.quantumstudy.com/a-large-insulates-sphere-of-radius-r-charged-with-q-units-of-electricity-is-placed-in-contact-with-a-small-insulated-uncharged-sphere-of-radius-r%EF%82%A2-and-is-then-separated-the-charge-on-the/ | # A large insulates sphere of radius r charged with Q units of electricity is placed in contact with a small insulated…..
A large insulates sphere of radius r charged with Q units of electricity is placed in contact with a small insulated uncharged sphere of radius r’ and is then separated. The charge on the smaller sphere will now be
(a) (Q(r’+ r))/r’
(b) (Q(r’+ r))/r
(c) Qr/(r’+ r)
(d) Qr’/(r’ + r)
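A hedged sketch of the standard reasoning (treating each sphere as an isolated conductor): on contact, charge flows until the two spheres reach the same potential, while the total charge is conserved:

```latex
\frac{q_1}{4\pi\varepsilon_0\, r} = \frac{q_2}{4\pi\varepsilon_0\, r'},
\qquad q_1 + q_2 = Q
\quad\Longrightarrow\quad
q_2 = \frac{Q\, r'}{r + r'}
```

which corresponds to option (d).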
Ans: (d) | 2021-09-16 16:28:58 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8189169764518738, "perplexity": 3178.6221127070335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053657.29/warc/CC-MAIN-20210916145123-20210916175123-00074.warc.gz"} |
https://documen.tv/question/write-an-eponential-function-to-model-the-following-situation-a-population-starting-at-740-anima-22221666-7/ | # Write an exponential function to model the following situation: A population starting at 740 animals decreases at an annual rate of 12
Question
Write an exponential function to model the following situation: A population starting at 740 animals
decreases at an annual rate of 12%. Use your function to determine about how many animals there will
be after 5 years.
A. 390
B. 150
C. 680
D. 340
An exponential function can be written in the form $$f(n)=a\times b^n$$, where $$a$$ is your starting population, $$b$$ is the factor by which the population increases, and $$n$$ is the number of terms. In this case, $$a=740$$, $$b=0.88$$ because the population decreases by 12 percent (1-0.12=0.88), and $$n=5$$.
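That setup can be checked numerically with a quick sketch:

```python
a = 740          # starting population
b = 1 - 0.12     # decay factor: the population keeps 88% each year
n = 5            # number of years
population = a * b ** n
print(population)   # about 390.5, i.e. roughly 390 animals (choice A)
```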
Thus, $$f(5)=740 \times 0.88^5$$, which is roughly equal to 390. | 2023-02-06 19:51:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7446261048316956, "perplexity": 429.43491708966206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500357.3/warc/CC-MAIN-20230206181343-20230206211343-00329.warc.gz"} |
https://www.physicsforums.com/threads/figuring-out-what-image-distance-lens-and-mirror-formula-to-use.625727/ | # Figuring out what image distance (lens and mirror) formula to use?
## Main Question or Discussion Point
I was practicing hw and I came across these two derived equations from 1/f=1/d0+1/di
1st one 1/di=1/d0-1/f
2nd one di=d0(f)/d0-f
How do distinguish which one to use? and how did they get that derivation for the 2nd equation?
Simon Bridge
Homework Helper
I wouldn't worry about memorizing the different forms of the equations to use for which situation, focus on understanding the physics.
Note - the second one is just what happens to the lensmaker's formula if you solve it for di. The first one is incorrect.
jtbell
Mentor
I was practicing hw and I came across these two derived equations from 1/f=1/d0+1/di
1st one 1/di=1/d0-1/f
Are you sure the right-hand side wasn't reversed, that is,
$$\frac{1}{d_i} = \frac{1}{f} - \frac{1}{d_o}$$
2nd one di=d0(f)/d0-f
This is just a re-arrangement of your first equation. It's good algebra practice. First, get the 1/di all by itself on the left as in my equation above. Can you see where to go from there? (hint: how do you add or subtract fractions?)
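Carrying out that rearrangement gives di = do·f/(do − f). A quick numerical sanity check in Python (illustrative values):

```python
f, d_o = 10.0, 30.0
d_i = d_o * f / (d_o - f)    # rearranged thin-lens equation
print(d_i)                   # 15.0
# consistency with 1/f = 1/d_o + 1/d_i
assert abs(1 / f - (1 / d_o + 1 / d_i)) < 1e-12
```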
Oh yeah sorry oops I was just wondering about the 2nd one.
Simon Bridge | 2020-08-07 04:50:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6648419499397278, "perplexity": 1195.7206091842509}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737152.0/warc/CC-MAIN-20200807025719-20200807055719-00363.warc.gz"} |
https://www.isid.ac.in/~statmath/index.php?module=Preprint&Action=ViewAbs&PreprintId=18 | # Publications and Preprints
Zakai equation of nonlinear filtering with Ornstein-Uhlenbeck noise: Existence and Uniqueness
by
Abhay Bhatt, Balram Rajput and Jie Xiong
We consider a filtering model where the noise is an Ornstein-Uhlenbeck process independent of the signal $X$. The signal is assumed to be a Markov diffusion process. We derive the (analogue of the) Zakai equation in this setup. It is a system of two measure valued equations satisfied by the unnormalised conditional distribution. We also prove uniqueness of solutions to these equations.
isid/ms/2002/18 [fulltext] | 2020-01-26 18:19:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8322349786758423, "perplexity": 1487.3881474618306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251690095.81/warc/CC-MAIN-20200126165718-20200126195718-00322.warc.gz"} |
https://www.physicsforums.com/threads/chebshev-polynomial-approximation.536690/ | # Chebshev polynomial approximation
1. Oct 4, 2011
### sbashrawi
1. The problem statement, all variables and given/known data
Hi everybody,
I am trying to find a polynomial approximation to the function f(x) = (x+2)ln(x+2)
using the Chebyshev polynomials.
The idea is to use MATLAB to find the coefficients of the approximation polynomial
using the command double(int(...)),
but this command doesn't give me any numerical value.
What I got was:
>> int((x+2)*log(x+2)*(1-x^2)^-0.5,-1,1)
Warning: Explicit integral could not be found.
ans =
int((log(x + 2)*(x + 2))/(1 - x^2)^(1/2), x = -1..1)
>>
and if I use double(int(...)) an error message shows up
Any help pls
2. Relevant equations
3. The attempt at a solution
2. Oct 5, 2011
### TheoMcCloskey
Consider the substitution $x=\cos(\theta)$ and note that $T_k(x) = \cos(k \,\theta)$. This should remove the singularity. | 2018-02-18 09:09:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6895135641098022, "perplexity": 7108.883824158735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891811795.10/warc/CC-MAIN-20180218081112-20180218101112-00127.warc.gz"} |
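Following that substitution, the coefficient integrals can also be evaluated numerically rather than symbolically. A pure standard-library Python sketch (function and variable names are mine, not from the thread) that computes Chebyshev coefficients of f(x) = (x+2)ln(x+2) via θ = arccos x:

```python
import math

def cheb_coeffs(f, n):
    """Coefficients c_0..c_{n-1} of f ~ sum_k c_k T_k(x) on [-1, 1].

    With x = cos(theta) we have T_k(x) = cos(k*theta); the integrals are
    evaluated at the Chebyshev-Gauss nodes, which removes the singularity.
    """
    thetas = [(j + 0.5) * math.pi / n for j in range(n)]
    fx = [f(math.cos(t)) for t in thetas]
    coeffs = []
    for k in range(n):
        s = sum(fx[j] * math.cos(k * thetas[j]) for j in range(n))
        coeffs.append(2.0 * s / n)
    coeffs[0] /= 2.0        # c_0 carries a factor 1/pi instead of 2/pi
    return coeffs

def cheb_eval(coeffs, x):
    t = math.acos(x)
    return sum(c * math.cos(k * t) for k, c in enumerate(coeffs))

f = lambda x: (x + 2.0) * math.log(x + 2.0)
c = cheb_coeffs(f, 16)
print(abs(cheb_eval(c, 0.3) - f(0.3)))   # tiny: the expansion converges fast
```

Since f is smooth on [-1, 1] (its singularity sits at x = -2), a handful of coefficients already gives near machine-precision accuracy.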
http://openstudy.com/updates/4ddbf4efa7af8b0b834e7c7a | ## anonymous 5 years ago The number of bacteria in a certain population increases 4.5% according to an exponential growth model, with a growth rate of per hour. How many hours does it take for the size of the sample to double? Do not round any intermediate computations, and round your answer to the nearest hundredth.
you need to solve $e^{.045t}=2$ for t. takes two steps $.045t=ln(2)$ $t=\frac{ln(2)}{.045}$ then use a calculator to get $t=15.403$ | 2016-10-27 11:39:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3921898603439331, "perplexity": 222.5373081188679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721268.95/warc/CC-MAIN-20161020183841-00393-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://datawiz.io/en/blog/ebitda-how-to-calculate | #StandWithUkraine
Click to support
Cases
Dec. 13, 2022
EBITDA: How Can You Calculate It and What Does It Inform You About?
#### Alla
PhD, Financial consultant at Datawiz
# EBITDA: calculation method and value in retail
EBITDA is one of the most popular indicators of a company's success. It is ambiguous and controversial, but it is widely used among analysts. Let's try to figure out how it is calculated, what it tells chain owners, and how managers can use it.
## What is EBITDA?
EBITDA is a kind of profit, and its components are revealed by the name - abbreviation:
• E - Earnings - profit
• B - Before ...
• I - Interest
• T - Taxes
• D - Depreciation
• A - Amortization
To better understand the methodology for calculating the indicator, it is necessary to consider all its components.
Interest is the difference between interest expenses and interest income related to the payment of interest on issued/received loans or deposit accounts. For example, if the chain uses a bank loan, then the interest paid is an interest expense. And the received interest on a bank deposit is interest income. The difference between these amounts will form the Interest in the EBITDA formula.
Note! Often there is only interest expense in the chain.
Taxes include only one tax - income tax.
Depreciation and Amortization - the part of the cost of assets (equipment, licenses, patents) that is written off and included in chain expenses.
For example, a chain bought $20K worth of shop equipment with a 10-year service life. The company will write off $2K of the cost of the equipment annually as depreciation.
## How to calculate EBITDA?
It should be noted that there are different approaches to calculating EBITDA, but the most common are:
• EBITDA = Net income + Interest (interest expense - interest income) + Tax + Depreciation + Amortization
• EBITDA = Sales revenue - Cost of sales - Operating expenses
What should be the proper value of EBITDA? For each company, the desired value of this indicator is different and depends on many factors. In general, the higher the EBITDA, the more reliable the company is and more able to service its debts on its own.
You can also follow the dynamics of the indicator: if it grows - it is good, if it decreases - it is bad.
A negative value indicates a significant unprofitability of the company and a high probability of its bankruptcy.
Example 1. The chain's net income is $120K, income tax paid is $2.6K, interest on the loan is $3K, and depreciation is $4.4K.
EBITDA = 120 + 2.6 + 3 + 4.4 = $130K
Example 2. During the year, the company received the following results of work:
• Sales value - $3,500K
• Cost of sales - $2,100K
• Marketing expenses (sales expenses) - $50K
• including Depreciation and Amortization - $10K
• Management expenses - $120K
EBITDA = 3500 - 2100 - (50 - 10) - 120 = $1,240K
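Both calculation approaches can be checked against the two examples with a short sketch (the function names here are illustrative, not from any accounting library):

```python
def ebitda_bottom_up(net_income, interest, tax, depreciation, amortization=0.0):
    # EBITDA = Net income + Interest + Tax + Depreciation + Amortization
    return net_income + interest + tax + depreciation + amortization

def ebitda_top_down(revenue, cost_of_sales, operating_expenses, d_and_a_in_opex=0.0):
    # EBITDA = Sales revenue - Cost of sales - Operating expenses,
    # adding back any D&A already sitting inside operating expenses
    return revenue - cost_of_sales - (operating_expenses - d_and_a_in_opex)

# Example 1 (figures in $K): 120 + 3 + 2.6 + 4.4 -> about 130
print(ebitda_bottom_up(120, 3, 2.6, 4.4))
# Example 2 (figures in $K): 3500 - 2100 - (170 - 10) -> 1240
print(ebitda_top_down(3500, 2100, 50 + 120, 10))
```

The two approaches agree only when non-operating and non-cash items are classified consistently, which is one reason the result depends on the chosen calculation method.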
## Benefits of EBITDA
EBITDA was first used in the 1980s by investors to assess a company's creditworthiness, that is, to determine whether the company has enough funds to pay all interest on its loans.
Now, with the help of this indicator, it is determined whether the company can pay capital costs (acquisition and maintenance of equipment, transport, buildings, licenses, etc.).
Key EBITDA Benefits:
• Can be used for different tax systems and when applying different income tax rates.
• Allows you to compare companies operating in various sectors of the economy.
• Widely used in financial analysis.
• Allows you to estimate the amount of debt that the company can cover.
• Quickly evaluates chain performance.
The main disadvantages of the indicator:
• Does not take into account the inventory turnover ratio.
• Does not take into account a heavy debt burden or a high rate of depreciation of non-current assets.
• Depends on the chosen calculation method.
• Does not contain information about sources of income.
However, despite these drawbacks, assessing and tracking EBITDA provides additional information about the state of affairs in the company and, if necessary, grounds for changing the chosen strategy.
To ensure that analytical calculations do not cause difficulties and give the maximum result, the Datawiz team, together with experts in retail, has prepared convenient and useful reports. With BES platform solutions, you can easily assess the situation in the chain and get insights to improve it.
# Do I Need A Scale?
Geometry Level 1
In the above figure, $$SR = 6$$, the radius of the larger circle is $$5$$, and that of the smaller circle is $$3$$. Find the length of $$PQ$$.
# 730. Count Different Palindromic Subsequences
## Problem
Given a string S, find the number of different non-empty palindromic subsequences in S, and return that number modulo 10^9 + 7.
A subsequence of a string S is obtained by deleting 0 or more characters from S.
A sequence is palindromic if it is equal to the sequence reversed.
Two sequences A_1, A_2, ... and B_1, B_2, ... are different if there is some i for which A_i != B_i.
Example 1:
Input:
S = 'bccb'
Output: 6
Explanation:
The 6 different non-empty palindromic subsequences are 'b', 'c', 'bb', 'cc', 'bcb', 'bccb'.
Note that 'bcb' is counted only once, even though it occurs twice.
Note:
Each character S[i] will be in the set {'a', 'b', 'c', 'd'}.
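One standard way to solve this (a sketch of the well-known interval DP, not necessarily the only approach) is to let dp[i][j] be the number of distinct non-empty palindromic subsequences of S[i..j]. When the end characters differ, inclusion-exclusion combines the two sub-intervals; when they match, the count depends on how many copies of that character occur strictly inside:

```python
MOD = 10 ** 9 + 7

def count_palindromic_subseq(s: str) -> int:
    n = len(s)
    # dp[i][j]: distinct non-empty palindromic subsequences of s[i..j]
    dp = [[0] * n for _ in range(n)]
    for i in range(n):
        dp[i][i] = 1
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            if s[i] != s[j]:
                # inclusion-exclusion over the two overlapping sub-intervals
                dp[i][j] = dp[i + 1][j] + dp[i][j - 1] - dp[i + 1][j - 1]
            else:
                lo, hi = i + 1, j - 1
                while lo <= hi and s[lo] != s[i]:
                    lo += 1
                while lo <= hi and s[hi] != s[j]:
                    hi -= 1
                if lo > hi:        # s[i] does not occur inside: 'x' and 'xx' are new
                    dp[i][j] = 2 * dp[i + 1][j - 1] + 2
                elif lo == hi:     # exactly one copy inside: 'xx' is new, 'x' is not
                    dp[i][j] = 2 * dp[i + 1][j - 1] + 1
                else:              # two or more copies: subtract the double-counted core
                    dp[i][j] = 2 * dp[i + 1][j - 1] - dp[lo + 1][hi - 1]
            dp[i][j] %= MOD
    return dp[0][n - 1]

print(count_palindromic_subseq('bccb'))  # 6, matching Example 1
```

This runs in O(n^2) time and space. Python's % always returns a non-negative result, so the subtraction case needs no extra sign adjustment.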
# Existence of total derivative of a function
Given the function $f(x,y) = \sqrt[3]{y}\cdot \arctan(x)$, discuss the existence and continuity of its partial derivatives and the existence of its total derivative.
Since the partial derivative $\frac{\partial f}{\partial y} = \frac{\arctan(x)}{3\sqrt[3]{y^2}}$ has discontinuity at $y=0$, I tried to compute the partial derivative at $(x,0)$ using the limit, which gives: $$\lim_{t\to0} \frac{f(x,t) - f(x,0)}{t} = \lim_{t\to 0} \frac{\sqrt[3]{t}}{t}\arctan(x) = +\infty.$$
Does this mean that the partial derivative doesn't exist at $(x,0)$? So there's no total derivative at $(x,0)$, and it can be said that the function is not differentiable on $\mathbb{R}^2$?
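A quick numerical check of the difference quotient supports this (a sketch; the point $x_0 = 1$ is an arbitrary illustrative choice with $\arctan(x_0) > 0$):

```python
import math

def f(x, y):
    # f(x, y) = cbrt(y) * arctan(x), using the real cube root for negative y
    return math.copysign(abs(y) ** (1.0 / 3.0), y) * math.atan(x)

x0 = 1.0
for t in (1e-3, 1e-6, 1e-9):
    q = (f(x0, t) - f(x0, 0.0)) / t  # behaves like arctan(x0) * t**(-2/3)
    print(t, q)
```

The quotient grows without bound as t shrinks, so the partial derivative in $y$ is infinite at $(x_0, 0)$ whenever $\arctan(x_0) \ne 0$; note that at $x = 0$ the quotient is identically zero, so the partial derivative does exist there.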
Popular Science Monthly, Volume 9, July 1876.

THE MECHANICAL ACTION OF LIGHT.[1]
By WILLIAM CROOKES, F. R. S.
TO generate motion has been found a characteristic common, with one exception, to all the phases of physical force. We hold the bulb of a thermometer in our hands, and the mercury expands in bulk, and, rising along the scale, indicates the increase of heat it has received. We heat water, and it is converted into steam, and moves our machinery, our carriages, and our iron-clads. We bring a load-stone near a number of iron-filings, and they move toward it, arranging themselves in peculiar and intricate lines; or we bring a piece of iron near a magnetic needle, and we find it turned away from its ordinary position. We rub a piece of glass with silk, thus throwing it into a state of electrical excitement, and we find that bits of paper or thread fly toward it, and are, in a few moments, repelled again. If we remove the supports from a mass of matter it falls, the influence of gravitation being here most plainly expressed in motion, as shown in clocks and water-mills. If we fix pieces of paper upon a stretched string, and then sound a musical note near it, we find certain of the papers projected from their places. Latterly the so-called "sensitive flames," which are violently agitated by certain musical notes, have become well known as instances of the conversion of sound into motion. How readily chemical force undergoes the same transformation is manifested in such catastrophes as those of Bremerhaven, in the recent deplorable coal-mine explosions, and indeed in every discharge of a gun.
But light, in some respects the highest of the powers of Nature, has not been hitherto found capable of direct conversion into motion, and such an exception cannot but be regarded as a singular anomaly.
This anomaly the researches which I am about to bring before you have now removed; and, like the other forms of force, light is found to be capable of direct conversion into motion, and of being—like heat, electricity, magnetism, sound, gravitation, and chemical action—most delicately and accurately measured by the amount of motion thus produced.
My research arose from the study of an anomaly.
It is well known to scientific men that bodies appear to weigh less when they are hot than when they are cold; the explanation given being that the ascending currents of hot air buoy up the body, so to speak. Wishing to get rid of this and other interfering actions of the air during a research on the atomic weight of thallium, I had a balance constructed in which I could weigh in a vacuum. I still, indeed, found my apparatus less heavy when hot than when cold. The obvious explanations were evidently not the true ones; obvious explanations seldom are true ones, for simplicity is not a characteristic of Nature.
An unknown disturbing cause was interfering, and the endeavor to find the clew to the apparent anomaly has led to the discovery of the mechanical action of light.
I was long troubled by the apparent lawlessness of the actions I obtained. By gradually increasing the delicacy of my apparatus I could easily get certain results of motion when hot bodies were brought near them, but sometimes it was one of attraction, at others of repulsion, while occasionally no movement whatever was produced.
I will try to reproduce these phenomena in this apparatus (Fig. 1). Here are two glass bulbs, each containing a bar of pith about three inches long and half an inch thick, suspended horizontally by a long fibre of cocoon silk. I bring a hot glass rod, or a candle, toward one of them, and you see that the pith is gradually attracted, following the candle as I move it round the bulb. That seems a very definite fact; but look at the action in the other bulb. I bring the candle, or a hot glass rod, near the other bar of pith, and it is strongly repelled by it, much more strongly than it was attracted in the first instance.
Here, again, is a third fact. I bring a piece of ice near the pith-bar which has just been repelled by the hot rod, and it is attracted, and follows the rod round as a magnetic needle follows a piece of iron.
The repulsion by radiation is the key-note of these researches. The movement of a small bar of pith is not very distinct, except to those near, and I wish to make this repulsion evident to all. I have therefore arranged a piece of apparatus by which it can be seen by all present. I will, by means of the electric light, project an image of a pendulum suspended in vacuo on the screen. You see that the approach of a candle gives the bob a veritable push, and, by alternately obscuring and uncovering the light, I can make the pendulum beat time to my movements.
What, then, is the cause of the contradictory action in these two bulbs—attraction in one, and repulsion in the other? It can be explained in a few words. Attraction takes place when air is present, and repulsion when air is absent.
Neutrality, or no movement, is produced when the vacuum is insufficient. A minute trace of air in the apparatus interferes most materially with the repulsion, and for a long time I was unaware of the powerful action produced by radiation in a "perfect" vacuum.
It is not at first sight obvious how ice or a cold body can produce the opposite effect to heat. The law of exchanges, however, explains this
Fig. 1. Fig. 2.
perfectly. The pith-bar and the whole of the surrounding bodies are incessantly exchanging heat-rays; and under ordinary circumstances the income and expenditure of heat are in equilibrium. Let me draw your attention to the diagram (Fig. 2), illustrating what takes place when I bring a piece of ice near the apparatus. The centre circle represents my piece of pith; the arrows show the influx and efflux of heat. A piece of ice brought near cuts off the influx of heat from one side, and therefore allows an excess of heat to fall on the pith from the opposite side. Attraction by a cold body is therefore seen to be only repulsion by the radiation from the opposite side of the room.
The later developments of this, research have demanded the utmost refinement of apparatus. Everything has to be conducted in glass vessels, and these must be blown together till they make one piece, for none but fused joints are admissible. In an investigation depending for its successful prosecution on manipulative dexterity, I have been fortunate in having the assistance of my friend Mr. Charles Gimingham. All the apparatus you see before you are the fruits of his skillful manipulation, and I now want to draw your attention to what I think is a masterpiece of glass-working—the pump which enables me so readily to produce a vacuum unattainable by ordinary means.
The pump here at work is a modification of the Sprengel pump, but it contains two or three valuable improvements. I cannot attempt to describe the whole of the arrangements, but I will rapidly run over them as illuminated by the electric light. It has a triple-fall tube in which the mercury is carried down, thus exhausting with threefold rapidity; it has Dr. McLeod's beautiful arrangement for measuring the residual gas; it has gauges in all directions, and a small radiometer attached to it to tell the amount of exhaustion that I get in any experiments; it has a contrivance for admitting oil of vitriol into the tubes without interfering with the progress of the exhaustion, and it is provided with a whole series of most ingenious vacuum-taps devised by Mr. Gimingham. The exhaustion produced in this pump is such that a current of electricity from an induction-coil will not pass across the vacuum. This pump is now exhausting a torsion-balance, which will be described presently. Another pump, of a similar kind but less complicated, is exhausting an apparatus which has enabled me to pass from the mere exhibition of the phenomena to the obtaining of quantitative measurements.
A certain amount of force is exerted when a ray of light or heat falls on the suspended pith, and I wished to ascertain—
1. What were the actual rays—invisible heat, luminous, or ultraviolet—which caused this action?
2. What influence had the color of the surface on the action?
3. Was the amount of action in direct proportion to the amount of radiation?
4. What was the amount of force exerted by radiation?
I required an apparatus which would be easily moved by the impact of light on it, but which would readily return to zero, so that measurements might be obtained of the force exerted when different amounts of light acted on it. At first I made an apparatus on the principle of Zöllner's horizontal pendulum. For a reason that will be explained presently, I am unable to show you the apparatus at work, but the principle of it is shown in the diagram (Fig. 3). The pendulum represented by this horizontal line has a weight at the end. It is supported on two fibres of glass, one stretched upward and the other stretched downward, both firmly fastened at the ends, and also attached to the horizontal rod (as shown in the figure) at points near together, but not quite opposite to one another.
It is evident that if there is a certain amount of pull upon each of these fibres, and that the pull can be so adjusted as to counteract the weight at the end and keep it horizontal, the nearer the beam approaches the horizontal line the slower its rate of oscillation. If I relax the tension, by throwing the horizontal beam downward, I get a more rapid oscillation sideways. If I turn the leveling-screw so as to raise the beam and weight, the nearer it approaches the horizontal position the slower the oscillation becomes, and the more delicate is the instrument. Here is the actual apparatus that I tried to work with. The weight at the end is a piece of pith; in the centre is a glass mirror, on which to throw a ray of light, so as to enable me to see the movements by a luminous index. The instrument, inclosed in glass and exhausted of air, was mounted on a stand with leveling-screws, and with it I tried the action of a ray of light falling on the pith. I found that I could get any amount of sensitiveness that I liked; but it was not only sensitive to the impact of a ray of light, it was immeasurably more so to a change of horizontality. It was, in fact, too delicate for me to work with. The slightest elevation of one end of the instrument altered the sensitiveness, or the position of the
Fig. 3. Fig. 4.
zero-point, to such a degree that it was impossible to try any experiments with it in such a place as London. A person stepping from one room to another altered the position of the centre of gravity of the house. If I walked from one side of my own laboratory to the other, I tilted the house over sufficiently to upset the equilibrium of the apparatus. Children playing in the street disturbed it. Prof. Rood, who has worked with an apparatus of this kind in America, finds that an elevation of its side equal to 1/36,000,000 part of an inch is sufficient to be shown on the instrument. It was therefore out of the question to use an instrument of this construction, so I tried another form (shown in Fig. 4), in which a fine glass beam, having disks of pith at each end, is suspended horizontally by a fine glass fibre, the whole being sealed up in glass and perfectly exhausted. To the centre of oscillation a glass mirror is attached.
Now, a glass fibre has the property of always coming back to zero when it is twisted out of its position. It is almost, if not quite, a perfectly elastic body. I will show this by a simple experiment. This is a long glass fibre hanging vertically, and having an horizontal bar suspended on it. I hold the bar, and turn it half round; it swings backward and forward for a few times, but it quickly comes back to its original position. However much twist, however much torsion, may be put on this, it always returns ultimately to the same position. I have twisted glass fibres round and kept them in a permanent state of twist more than a hundred complete revolutions, and they always came back accurately to zero. The principle of an instrument that I shall describe farther on depends entirely on this property of glass.
Instead of using silk to suspend the torsion-beam with, I employ a fibre of glass, drawn out very fine before the blow-pipe. A thread of glass of less than the thousandth of an inch in thickness is wonderfully strong, of great stiffness, and of perfect elasticity, so that, however much it is twisted round short of the breaking-point, it untwists itself perfectly when liberated. The advantage of using glass fibres for suspending my beam is, therefore, that it always returns accurately to zero after having tried an experiment, while I can get any desired amount of sensitiveness by drawing out the glass fibre sufficiently fine.
Here, then, is the torsion apparatus sealed on to a Sprengel pump. You will easily understand the construction by reference to the diagram (Fig. 4). It consists of an horizontal beam suspended by a glass fibre, and having disks of pith at each end coated with lampblack. The whole is inclosed in a glass case, made of tubes blown together, and by means of the pump the air is entirely removed. In the centre of the horizontal beam is a silvered mirror, and a ray from the electric light is reflected from it on to a scale in front, where it is visible as a small circular spot of light. It is evident that an angular movement of the torsion-beam will cause the spot of light to move to the right or to the left along the scale. I will first show you the wonderful sensitiveness of the apparatus. I simply place my finger near the pith-disk at one end, and the warmth is quite sufficient to drive the spot of light several inches along the scale. It has now returned to zero, and I place a candle near it. The spot of light flies off the scale. I now bring the candle near it alternately from one side to the other, and you see how perfectly it obeys the force of the candle. I think the movement is almost better seen without the screen than with it. The fog, which has been so great a detriment to every one else, is rather in my favor, for it shows the luminous index like a solid bar of light swaying to and fro across the room. The warmth of my finger, or the radiation from a candle, is therefore seen to drive the pith-disk away. Here is a lump of ice, and on bringing it near one of the disks the luminous index promptly shows a movement of apparent attraction.
With this apparatus I have tried many experiments, and among others I endeavored to answer the question, "Is it light, or is it heat, that produces the movement?"—for that is a question that is asked me by almost every one; and a good many appear to think that, if the motion can be explained by an action of heat, all the novelty and the importance of the discovery vanish. Now, this question of light or heat is one I cannot answer, and I think that when I have explained the reason you will agree with me that it is unanswerable. There is no physical difference between light and heat. Here is a diagram of the visible spectrum (Fig. 5). The spectrum, as scientific
Fig. 5.
men understand it, extends from an indefinite distance beyond the red to an indefinite distance beyond the violet. We do not know how far it would extend one way or the other if no absorbing media were present; but, by what we may call a physiological accident, the human eye is sensitive to a portion of the spectrum situated between the line A in the red to about the line H in the violet. But this is not a physical difference between the luminous and non-luminous parts of the spectrum; it is only a physiological difference. Now, the part at the red end of the spectrum possesses, in the greatest degree, the property of causing the sensation of warmth, and of dilating the mercury in a thermometer, and of doing other things which are conveniently classed among the effects of heat; the centre part affects the eye, and is therefore called light; while the part at the other end of the spectrum has the greatest energy in producing chemical action. But it must not be forgotten that any ray of the spectrum, from whatever part it is selected, will produce all these physical actions in more or less degree. A ray here, at the letter C for instance in the orange, if concentrated on the bulb of a thermometer, will cause the mercury to dilate, and thus show the presence of heat; if concentrated on my hand I feel warmth; if I throw it on the face of a thermo-pile it will produce a current of electricity; if I throw it upon a sensitive photographic plate it will produce chemical action; and if I throw it upon the instrument I have just described it will produce motion. What, then, am I to call that ray? Is it light, heat, electricity, chemical action, or motion? It is neither. All these actions are inseparable attributes of the ray of that particular wave-length, and are not evidences of separate identities. 
I can no more split that ray up into five or six different rays, each having different properties, than I can split up the element iron, for instance, into other elements, one possessing the specific gravity of iron, another its magnetic properties, a third its chemical properties, a fourth its conducting power for heat, and so on. A ray of light of a definite refrangibility is one and indivisible, just as an element is, and these different properties of the ray are mere functions of that refrangibility, and inseparable from it. Therefore when I tell you that a ray in the ultra-red pushes the instrument with a force of one hundred, and a ray in the most luminous part has a dynamic value of about half that, it must be understood that the latter action is not due to heat-rays which accompany the luminous rays, but that the action is one purely due to the wave-length and the refrangibility of the ray employed. You now understand why it is that I cannot give a definite answer to the question, "Is it heat or is it light that produces these movements?" There is no physical difference between heat and light; so, to avoid confusion, I call the total bundle of rays which come from a candle or the sun, radiation.
I found, by throwing the pure rays of the spectrum one after the other upon this apparatus, that I could obtain a very definite answer to my first question, "What are the actual rays which cause this action?"
The apparatus was fitted up in a room specially devoted to it, and was protected on all sides, except where the rays of light had to pass, with cotton-wool and large bottles of water. A heliostat reflected a beam of sunlight in a constant direction, and it was received on an appropriate arrangement of slit, lenses, prisms, etc., for projecting a pure spectrum. Results were obtained in the months of July, August, and September; and they are given in the figure (Fig. 5) graphically as a curve, the maximum being in the ultra-red and the minimum in the ultra-violet. Taking the maximum at 100, the following are the mechanical values of the different colors of the spectrum:
Ultra-red: 100
Extreme red: 85
Red: 73
Orange: 66
Yellow: 57
Green: 41
Blue: 22
Indigo: 8 1/2
Violet: 6
Ultra-violet: 5
A comparison of these figures is a sufficient proof that the mechanical action of radiation is as much a function of the luminous rays as it is of the dark heat-rays. The second question—namely, "What influence has the color of the surface on the action?" has also been solved by this apparatus.
In order to obtain comparative results between disks of pith coated with lampblack and with other substances, another torsion apparatus was constructed, in which six disks in vacuo could be exposed one after the other to a standard light. One disk always being lamp-blacked pith, the other disks could be changed so as to get comparisons of action. Calling the action of radiation from a candle on the lampblacked disk 100, the following are the proportions obtained:
Lampblacked pith: 100
Iodide of palladium: 87.3
Precipitated silver: 56
Amorphous phosphorus: 40
Sulphate of baryta: 37
Milk of sulphur: 31
Red oxide of iron: 28
Scarlet iodide of mercury and copper: 22
Lampblacked silver: 18
White pith: 18
Carbonate of lead: 13
Rock-salt: 6.5
Glass: 6.5
This table gives important information on many points: one more especially, the action of radiation on lampblacked pith is five and a half times what it is on plain pith. A bar like those used in my first experiment, having one-half black and one-half white, exposed to a broad beam of radiation, will be pushed with five and a half times more strength on the black than on the white half, and if freely suspended will set at an angle greater or less according to the intensity of the radiation falling on it.
This suggests the employment of such a bar as a photometer, and I have accordingly made an instrument on this principle; its construction is shown in the diagram (Fig. 6). It consists of a flat bar of pith, A, half black and half white, suspended horizontally in a bulb by means of a long silk fibre. A reflecting mirror, B, and small magnet, C, are fastened to the pith, and a controlling magnet, D, is fastened outside so that it can slide up and down the tube, and thus increase or diminish sensitiveness. The whole is completely exhausted and then inclosed in a box lined with black velvet, with apertures for the rays of light to pass in and out. A ray of light from a lamp, F, reflected from the mirror, B, to a graduated scale, G, shows the movements of the pith-bar.
The instrument fitted up for a photometric experiment is in front of me on the table. A beam from the electric light falls on the little mirror, and is thence reflected back to the screen, where it forms a spot of light, the displacement of which to the right or the left shows the movement of the pith-bar. One end of the bar is blacked on each side, the other end being left plain. I have two candles, E E, each twelve inches off the pith-bar, one on each side of it. When I remove the screens, H H, the candle on one side will give the pith a
Fig. 6.
push in one direction, and the candle on the other side will give the pith a push in the opposite direction, and as they are the same distance off they will neutralize each other, and the spot of light will not move. I now take the two screens away: each candle is pushing the pith equally in opposite directions, and the luminous index remains at zero. When, however, I cut one candle off, the candle on the opposite side exerts its full influence, and the index flies to one end of the scale. I cut the other one off and obscure the first, and the spot of light flies to the other side. I obscure them both, and the index comes quickly to zero. I remove the screens simultaneously, and the index does not move.
I will retain one candle 12 inches off, and put two candles on the other side 17 inches off. On removing the screens you see the index does not move from zero. Now the square of 12 is 144, and the square of 17 is 289. Twice 144 is 288. The light of these candles, therefore, is as 288 to 289. They therefore balance each other as nearly as possible. Similarly I can balance a gaslight against a candle. I have a small gas-burner here, which I place 28 inches off on one side, and you see it balances the candle 12 inches off. These experiments show how conveniently and accurately this instrument can be used as a photometer. By balancing a standard candle on one side against any source of light on the other, the value of the latter in terms of a candle is readily shown; thus in the last experiment the standard candle 12 inches off is balanced by a gas-flame 28 inches off. The lights are, therefore, in the proportion of 12² to 28², or as 1 to 5.4. The gas-burner is, therefore, equal to about five and a half candles.
In practical work on photometry it is often required to ascertain the value of gas. Gas is spoken of commercially as of so many candle-power. There is a certain "standard" candle which is supposed to be made invariable by act of Parliament. I have worked a great deal with these standard candles, and I find them to be among the most variable things in the world. They never burn with the same luminosity from one hour to the other, and no two candles are alike. I can now, however, easily get over this difficulty. I place a "standard" candle at such a distance from the apparatus that it gives a deflection of 100° on the scale. If it is poorer than the standard, I bring it nearer; if better, I put it farther off. Indeed, any candle may be taken; and if it be placed at such a distance from the apparatus that it will give a uniform deflection, say, of 100 divisions, the standard can be reproduced at any subsequent time; and the burning of the candle may be tested during the photometric experiments by taking the deflection it causes from time to time, and altering its distance, if needed, to keep the deflection at 100 divisions. The gaslight to be tested is placed at such a distance on the opposite side of the pith-bar that it exactly balances the candle. Then, by squaring the distances, I get the exact proportion between the gas and the candle.
Before this instrument can be used as a photometer or light-measurer, means must be taken to cut off from it all those rays coming from the candle or gas which are not actually luminous. A reference to the spectrum diagram (Fig. 5) will show that at each end of the colored rays there is a large space inactive, as far as the eye is concerned, but active in respect to the production of motion—strongly so at the red end, less strong at the violet end. Before the instrument can be used to measure luminosity, these rays must be cut off. We buy gas for the light that it gives, not for the heat that it evolves on burning, and it would therefore never do to measure the heat and pay for it as light.
It has been found that a clear plate of alum, while letting all the light through, is almost if not quite opaque to the heating rays below the red. A solution of alum in water is almost as effective as a crystal of alum; if, therefore, I place in front of the instrument glass cells containing an aqueous solution of alum, the dark heat-rays are filtered off.
But the ultra-violet rays still pass through, and to cut these off I dissolve in the alum-solution a quantity of sulphate of quinine. This body has the property of cutting off the ultra-violet rays from a point between the lines G and H. A combination of alum and sulphate of quinine, therefore, limits the action to those rays which affect the human eye, and the instrument, such as you see it before you, becomes a true photometer.
This instrument, when its sensitiveness is not deadened by the powerful control magnet I am obliged to keep near it for these experiments, is wonderfully sensible to light. In my own laboratory, a candle thirty-six feet off produces a decided movement, and the motion of the index increases inversely with the square of the distance, thus answering the third question, "Is the amount of action in direct proportion to the amount of radiation?"
The experimental observations and the numbers which are required by the theoretical diminution of light with the square of the distance are sufficiently close, as the following figures show:
Candle  6 feet off gives a deflection of 218.0°
   "   12   "    "    "        "          54.0°
   "   18   "    "    "        "          24.5°
   "   24   "    "    "        "          13.0°
   "   10   "    "    "        "          77.0°
   "   20   "    "    "        "          19.0°
   "   30   "    "    "        "           8.5°
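If the inverse-square law holds, each deflection multiplied by the square of its distance should come out nearly constant. A brief check of the figures in the table:

```python
# (distance in feet, deflection in degrees), as read from the table
readings = [(6, 218.0), (12, 54.0), (18, 24.5), (24, 13.0),
            (10, 77.0), (20, 19.0), (30, 8.5)]

# deflection * distance^2 should be near-constant under inverse squares
products = [deflection * d ** 2 for d, deflection in readings]

# relative spread between the largest and smallest product
spread = (max(products) - min(products)) / min(products)
```

The products all fall between about 7,500 and 7,950, a spread of roughly six per cent, which is close agreement for a lecture-room instrument.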
The effect of two candles side by side is practically double, and of three candles three times that of one candle.
In the instrument just described, the candle acts on a pith-bar, one end of which is blacked on each side. But suppose I black the bar on alternate halves, and place a light near it sufficiently strong to drive the bar half round. The light will now have presented to it another black surface in the same position as the first, and the bar will be again driven in the same direction half round. This action will be again repeated; the differential action of the light on the black and white surfaces keeps the bar moving, and the result will be rotation.
Fig. 7. Fig. 8.

Here is such a pith-bar, blacked on alternate sides, and suspended in an exhausted glass bulb (Fig. 7). I project its image on the screen, and the strong light which shines on it sets it rotating with considerable velocity. Now it is slackening speed, and now it has stopped altogether. The bar is supported on a fibre of silk, which has twisted round till the rotation is stopped by the accumulated torsion. I put a water-screen between the bar and the electric light to cut off some of the active rays, and the silk untwists, turning the bar in the opposite direction. I now remove the water, and the bar revolves rapidly as at first.
From suspending the pith on a silk fibre to balancing it on a point the transition is slight; the interfering action of torsion is thereby removed, and the instrument rotates continuously under the influence of radiation. Many of these little pieces of apparatus, to which I have given the name of radiometers, are on the table, revolving with more or less speed. The diagram (Fig. 8) shows their construction, which is very simple. They are formed of four arms of very fine glass, supported in the centre by a needle-point, and having at the extremities thin disks of pith lampblacked on one side, the black surfaces all facing the same way. The needle stands in a glass cup, and the arms and disks are delicately balanced so as to revolve with the slightest impetus.
Here are some rotating by the light of a candle. This one is now rather an historical instrument, being the first one in which I saw rotation. It goes very slowly in comparison with the others, but it is not bad for the first instrument of the sort that was ever made.
I will now, by means of a vertical lantern, throw on the screen the projection of one of these instruments, so as to show the movement rather better than you could see it on the table. The electric light falling vertically downward on it, and much of the power being cut off by water and alum screens, the rotation is slow. I bring a candle near and the speed increases. I now lift the radiometer up, and place it full in the electric light, projecting its image direct on the screen, and it goes so rapidly that if I had not cut out the four pieces of pith of different shapes you would have been unable to follow the movement.
The speed with which a sensitive radiometer will revolve in the sun is almost incredible; and the electric light, such as I have it in this lantern, cannot be far short of full sunshine. Here is the most sensitive instrument I have yet made, and I project its image on the screen, letting the full blaze of the electric light shine upon it. Nothing is seen but an undefined nebulous ring, which becomes at times almost invisible. The number of revolutions per second cannot be counted, but they must be several hundreds, for one candle has made it spin round forty times a second.
I have called the instrument the radiometer because it will enable me to measure the intensity of radiation falling on it by counting the revolutions in a given time; the law being that the rapidity of revolution is inversely as the square of the distance between the light and the instrument.
When exposed to different numbers of candles at the same distance off, the speed of revolution in a given time is in proportion to the number of candles; two candles giving twice the rapidity of one candle, and three, three times, etc.
The position of the light in the horizontal plane of the instrument is of no consequence, provided the distance is not altered; thus two candles, one foot off, give the same number of revolutions per second, whether they are side by side or opposite to each other. From this it follows that if the radiometer is brought into a uniformly lighted space it will continue to revolve.
It is easy to get rotation in a radiometer without having the surfaces of the disks differently colored. Here is one having the pith-disks blacked on both sides. I project its image on the screen, and there is no movement. I bring a candle near it, and shade the light from one side, when rapid rotation is produced, which is at once altered in direction by moving the shade to the other side.
I have arranged here a radiometer so that it can be made to move by a very faint light, and at the same time its rotation is easily followed by all present. In this bulb is a large six-armed radiometer carrying a mirror in its centre. The mirror is almost horizontal, but not quite so, and therefore, when I throw a beam of electric light vertically downward on to the central mirror, the light is reflected off at a slight angle, and, as the instrument rotates, its movement is shown by the spot of light traveling round the ceiling in a circle. Here again the fog helps us, for it gives us an imponderable beam of light moving round the room like a solid body, and saving you the trouble of looking up at the ceiling. I now set the radiometer moving round by the light of a candle, and I want to show you that colored light does not very much interfere with the movement. I place yellow glass in front, and the movement is scarcely diminished at all. Very deep-colored glass, you see, diminishes it a little more. Blue and green glass make it go a little slower, but still do not diminish the speed one-half. I now place a screen of water in front: the instrument moves with diminished velocity, rotating with about one-fourth its original speed.
Taking the action produced by a candle-flame as 100—
    Yellow glass reduces it to 89
    Red glass reduces it to 81
    Blue glass reduces it to 56
    Green glass reduces it to 56
    Water reduces it to 26
    Alum reduces it to 15
I now move the candle a little distance off, so as to make the instrument move slower, and bring a flask of boiling water close to it. See what happens. The luminous index no longer moves steadily, but in jerks. Each disk appears to come up to the boiling water with difficulty, and to hurry past it. More and more sluggishly do they move past, until now one has failed to get by, and the luminous beam, after oscillating to and fro a few times, comes to rest. I now gradually bring the candle near. The index shows no movement. Nearer still. There is now a commencement of motion, as if the radiometer were trying to push past the resistance offered by the hot water; but it is not until I have brought the candle to within a few inches of the glass globe that rotation is recommenced. On these pith radiometers the action of dark heat is to repel the black and white surfaces almost equally, and this repulsion is so energetic as to overcome the rotation caused by the candle, and to stop the instrument.
With a radiometer constructed of a good conductor of heat, such as metal, the action of dark heat is different. Here is one made of silvered copper, polished on one side and lampblacked on the other. I have set it moving slowly with a candle in the normal way. Here is a glass shade heated so that it feels decidedly warm to the hand. I cover the radiometer with it, and the rotation first stops, and then recommences the reverse way. On removing the hot shade the reverse movement ceases and normal rotation recommences.
If, however, I place a hot glass shade over a pith radiometer, the arms at once revolve the normal way, as if I had exposed the instrument to light. The diametrically opposite behavior of a pith and a metal instrument when exposed to the dark heat radiated from a hot glass shade is very striking. The explanation of the action is not easy, but it depends on the fact that the metal is one of the best conductors of heat, while pith is one of the worst.
One more experiment with this metallic radiometer. I heat it strongly with a spirit-lamp, and the arms spin round rapidly. Now the whole bulb is hot, and I remove the lamp: see what happens. The rotation quickly diminishes. Now it is at rest; and now it is spinning round just as fast the reverse way. I can produce this reverse movement only with difficulty with a pith instrument. The action is due to the metal being a good conductor of heat. As it absorbs heat it moves one way; as it radiates heat it moves the opposite way.
At first I made these instruments of the very lightest material possible, some of them not weighing more than half a grain; and, where extreme sensitiveness is required, lightness is essential. But the force which carries them round is quite strong enough to move a much greater weight. Thus the metallic instrument I have just experimented with weighs over thirteen grains, and here is one still heavier, made of four pieces of looking-glass blacked on the silvered side, which are quickly sent round by the impact of this imponderable agent, and flash the rays of light all round the room when the electric lamp is turned on the instrument.
Before dismissing this instrument let me show one more experiment. I place the looking-glass and the metal radiometer side by side, and, screening the light from them, they come almost to rest. Their temperature is the same as that of the room. What will
Fig. 9.
happen if I suddenly chill them? I pour a few drops of ether on each of the bulbs. Both instruments begin to revolve. But notice the difference. While the movement in the case of the metal radiometer is direct, that of the looking-glass instrument is reverse. And yet to a candle they both rotate the same way, the black being repelled.
Now, having found that this force would carry round a comparatively heavy weight, another useful application suggested itself. If I can carry round heavy mirrors or plates of copper, I can carry round a magnet. Here, then (Fig. 9), is an instrument carrying a magnet, and outside is a smaller magnet, delicately balanced in a vertical position, having the south pole at the top and the north pole at the bottom. As the inside magnet comes round, the outside magnet, being delicately suspended on its centre, bows backward and forward, and, making contact at the bottom, carries an electric current from a battery to a Morse instrument. A ribbon of paper is drawn through the "Morse" by clock-work, and at each contact—at each revolution of the radiometer—a record is printed on the strip of paper by dots; close together if the radiometer revolves quickly, farther apart if it goes slower.
Here the inner magnet is too strong to allow the radiometer to start with a faint light without some initial impetus. Imagine the instrument to be on the top of a mountain, away from everybody, and I wish to start it in the morning. Outside the bulb are a few coils of insulated copper wire, and by depressing the key for an instant I pass an electric current from the battery through them. The interior magnet is immediately deflected from its north-south position, and the impetus thus gained enables the light to keep up the rotation. In a proper meteorological instrument I should have an astatic combination inside the bulb, so that a very faint light would be sufficient to start it, but in this case I am obliged to set it going by an electric current. I have placed a candle near the magnetic radiometer. I now touch the key; the instrument immediately responds; the paper
Fig. 10.
unwinds from the Morse instrument, and on it you will see dots in regular order. I put the candle eight inches off, and the dots come wide apart. I place it five and three-quarters inches off, and two dots come where one did before. I bring the candle four inches from the instrument, and the dots become four times as numerous (Fig. 10), thus recording automatically the intensity of the light falling on the instrument, and proving that in this case also the radiometer obeys the law of inverse squares.
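The dot-counts follow the inverse-square rule: (8 / 5.75)² is close to 2, and (8 / 4)² is exactly 4. In sketch form:

```python
def dot_rate(distance, k=1.0):
    """Dots per unit time scale as 1/distance^2 (law of inverse squares)."""
    return k / distance ** 2

r_8in = dot_rate(8)       # candle eight inches off
r_575in = dot_rate(5.75)  # "two dots come where one did before"
r_4in = dot_rate(4)       # "four times as numerous"

double_ratio = r_575in / r_8in    # ~1.94, close to the observed doubling
quadruple_ratio = r_4in / r_8in   # exactly 4
```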
This instrument, the principle of which I have illustrated to-night, is not a mere toy or scientific curiosity, but it is capable of giving much useful information in climatology. You are well aware that the temperature, the rainfall, the atmospheric pressure, the direction and force of the wind, are now carefully studied in most countries, in order to elucidate their sanitary condition, their animal and vegetable productions, and their agricultural capabilities. But one most important element, the amount of light received at any given place, has been hitherto but very crudely and approximately estimated, or rather guessed at. Yet it cannot be denied that sunlight has its effect upon life and health, vegetable, animal, and human, and that its relative amount at any place is hence a point of no small moment. The difficulty is now overcome by such an instrument as this. The radiometer may be permanently placed on some tall building, or high mountain, and, by connecting it by telegraphic wires to a central observatory, an exact account can be kept of the proportion of sunlight received in different latitudes, and at various heights above the sea-level. Furthermore, our records of the comparative temperature of different places have been hitherto deficient. The temperature of a country depends partly on the amount of rays which it receives direct from the sun, and partly on the atmospheric and oceanic currents, warm or cold, which sweep over or near it. The thermometer does not discriminate between these influences; but the radiometer will enable us now to distinguish how much of the annual temperature of a place is due to the direct influence of the sun alone, and how much to the other factors above referred to.
I now come to the last question which I stated at the beginning of this lecture, "What is the amount of force exerted by radiation?" Well, I can calculate out the force in a certain way, from data supplied by this torsion apparatus (Fig. 4). Knowing the weight of the beam, the power of the torsion fibre of glass, its time of oscillation, and the size of the surface acted on, it is not difficult to calculate the amount of force required to deflect the beam through a given angle; but I want to get a more direct measure of the force. I throw a ray of light upon one of these instruments, and it gives a push; surely it is possible to measure the amount of this push in parts of a grain. This I have succeeded in doing in the instrument behind me; but before showing the experiment I want to illustrate the principle upon which it depends. Here is a very fine glass fibre suspended from an horizontal bar, and I wish to show you the strength of it. The fibre is only a few thousandths of an inch thick; it is about three feet long, and at the lower end is hanging a scale-pan, weighing 100 grains. So I start with a pull of 100 grains on it. I now add little lead weights, 50 grains each, till it breaks. It bears a pull of 750 grains, but gives way when additional weight is added. You see, then, the great strength of a fibre of glass, so fine as to be invisible to all who are not close to it, to resist a tensile strain.
Now I will illustrate another equally important property of a glass thread, viz., its power to resist torsion. Here is a still finer glass thread, stretched horizontally between two supports; and in order to show its position I have put little jockeys of paper on it. One end is cemented firmly to a wooden block, and the other end is attached to a little instrument called a counter—a little machine for registering the number of revolutions. I now turn this handle till the fibre breaks, and the counter will tell me how many twists I have given this fibre of glass. You see it breaks at twenty revolutions. This is rather a thicker fibre than usual. I have had them bear more than 200 turns without breaking, and some that I have worked with are so fine that if I hold one of them by the end it curls itself up and floats about the room like a piece of spider's thread.
Having now illustrated these properties of glass fibres, I will try to show a very delicate experiment. I want to ascertain the amount of pressure which radiation exerts on a blackened surface. I will put a ray of light on the pan of a balance, and give you its weight in grains, for I think in this Institution and before this audience I may be allowed a scientific use of the imagination, and may speak of weighing that which is not affected by gravitation.
The principle of the instrument is that of W. Ritchie's torsion balance, described by him in the "Philosophical Transactions" for 1830. The construction is somewhat complicated, but it can be made out on reference to the diagram (Fig. 11). A light beam, A B, having two square inches of pith, C, at one end, is balanced on a very fine fibre of glass, D D', stretched horizontally in a tube; one end of the fibre being connected with a torsion handle, E, passing through the tube, and indicating angular movements on a graduated circle. The beam is cemented to the torsion fibre, and the whole is inclosed in glass, and connected with the mercury pump by a spiral tube, F, and exhausted as perfectly as possible. G is a spiral spring, to keep the fibre in a uniform state of tension, H is a piece of cocoon silk. I is a glass stopper, which is ground into the tube as perfectly as possible, and then highly polished and lubricated with melted India-rubber, which is the only substance I know that allows perfect lubrication and will still hold a vacuum. The pith, C, represents the scale-pan of the balance. The cross-beam A B, which carries it, is cemented firmly to the thin glass fibre, D, and in the centre is a piece of mirror, K. Now, the cross-beam A B and the fibre D being rigidly connected together, any twist which I give to the torsion handle E will throw the beam out of adjustment. If, on the other hand, I place a weight on the piece of pith C, that end of the beam will fall down, and I shall have to turn the handle, E, round and round a
Fig. 11.
certain number of times, until I have put sufficient torsion on the fibre D to lift up the beam. Now, according to the law of torsion, the force with which a perfectly elastic body like glass tends to untwist itself is directly proportional to the number of degrees through which it has been twisted; therefore, knowing how many degrees of torsion I must put on the fibre to lift up the 1/100 of a grain weight, I can tell how many degrees of torsion are required to lift up any other weight; and conversely, putting an unknown weight or pressure on the pith, I can find its equivalent in grains by seeing how much torsion it is equal to. Thus, if 1/100 of a grain requires 10,000° of torsion, 1/50 of a grain would require 20,000°; and conversely, a weight which required 5,000° torsion would weigh 1/200 of a grain. Once knowing the torsion equivalent of 1/100 of a grain, the ratio of the known to the unknown weights is given by the degrees of torsion.
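The proportion rule can be put in a few lines; the round figure of 10,000° for 1/100 grain is the one used in the text:

```python
DEGREES_PER_HUNDREDTH_GRAIN = 10_000  # round figure used in the text

def torsion_for(weight_grains):
    """Degrees of twist needed to balance a given weight; by the law of
    torsion, the restoring force is proportional to the angle of twist."""
    return weight_grains / 0.01 * DEGREES_PER_HUNDREDTH_GRAIN

def weight_for(torsion_degrees):
    """Inverse conversion: weight in grains balanced by a given twist."""
    return torsion_degrees / DEGREES_PER_HUNDREDTH_GRAIN * 0.01

t_fiftieth = torsion_for(1 / 50)   # twice 1/100 grain: 20,000 degrees
w_5000 = weight_for(5_000)         # half the torsion: 1/200 grain
```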
Having thus explained the working of the torsion balance I will proceed to the actual experiment. On the central mirror I throw a ray from the electric light, and the beam reflected on a particular spot of the ceiling will represent zero. The graduated circle J of the instrument also stands at zero, and the counter which I fasten on at the end L stands at 0. The position of the spot of light reflected from the little concave mirror being noted, the torsion balance enables me to estimate the pressure or weight of a beam of light to a surprising degree of exactness. I lift up my little iron weight by means of a magnet (for working in a vacuum I am restricted in the means of manipulating), and drop it in the centre of the pith: it knocks the scale-pan down, as if I had placed a pound weight upon an ordinary balance, and the index-ray of light has flown far from the zero-point on the ceiling. I now put torsion on the fibre to bring the beam again into equilibrium. The index-ray is moving slowly back again. At last it is at zero, and on looking at the circle and counter I see that I have had to make 27 complete revolutions and 301°, or 27 × 360° + 301° = 10,021°, before the force of torsion would balance the 1/100 of a grain.
I now remove the weight from the pith-pan of my balance, and liberate the glass thread from torsion by twisting it back again. Now the spot of light on the ceiling is at zero, and the counter and index are again at 0.
Having thus obtained the value of the 1/100 of a grain in torsion degrees, I will get the same for the radiation from a candle. I place a lighted candle exactly 6 inches from the blackened surface, and on removing the screen the pith scale-pan falls down, and the index-ray again flies across the ceiling. I now turn the torsion handle, and in much less time than in the former case the ray is brought back to zero. On looking at the counter I find it registers four revolutions, and the index points to 188°, making altogether 360° × 4 + 188° = 1628°, through which the torsion fibre has to be twisted to balance the light of the candle.
It is an easy calculation to convert this into parts of a grain weight; 10,021 torsion degrees representing 0.01 grain, 1628 torsion degrees represent 0.001624 grain.
10,021° : 0.01 grain :: 1628° : 0.001624 grain.
The radiation of a candle 6 inches off, therefore, weighs or presses the two square inches of blackened pith with a weight of 0.001624 grain. In my own laboratory, working with this torsion balance, I found that a candle 6 inches off gave a pressure of 0.001772 grain. The difference is only 0.000148 grain, and is fairly within the allowable limits of a lecture experiment. But this balance is capable of weighing to far greater accuracy than that. You have seen that a torsion of 10,021° balanced the hundredth of a grain. If I give the fibre 1° more twist the weight is overbalanced, as shown by the movement of the index-ray on the ceiling. Now 1° of torsion is about the 1/10,000 part of the whole torsion required by the 1/100 grain. It represents, therefore, the 1/10,000 part of the 1/100, or the millionth part of a grain.
Divide a grain-weight into a million parts, place one of them on the pan of the balance, and the beam will be instantly depressed!
Weighed in this balance the mechanical force of a candle 12 inches off was found to be 0.000444 grain; of a candle 6 inches off, 0.001772 grain. At half the distance the weight of radiation should be four times, or 0.001776 grain; the difference between theory and experiment being only four-millionths of a grain is a sufficient proof that the indications of this instrument, like those of the apparatus previously described, follow the law of inverse squares. An examination of the differences between the separate observations and the mean shows that my estimate of the sensitiveness of this balance is not excessive, and that in practice it will safely indicate the millionth of a grain.
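The two weighings check one another by the law of inverse squares: the 12-inch figure, scaled up fourfold, predicts the 6-inch figure to within a few millionths of a grain.

```python
# Pressures measured with the torsion balance, in grains
w_12in = 0.000444   # candle 12 inches off
w_6in = 0.001772    # candle 6 inches off

# Inverse-square prediction: at half the distance, four times the push
predicted_6in = w_12in * (12 / 6) ** 2    # 0.001776 grain
difference = abs(predicted_6in - w_6in)   # about four-millionths of a grain
```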
I have only had one opportunity of getting an observation of the weight of sunlight: it was taken on December 13th, but the sun was so obscured by thin clouds and haze that it was only equal to 10.2 candles 6 inches off. Calculating from this datum, it is seen that the pressure of sunshine is 2.3 tons per square mile.
But, however fair an equivalent ten candles may be for a London sun in December, a midsummer sun in a cloudless sky has a very different value. Authorities differ as to its exact equivalent, but I underestimate it at 1,000 candles 12 inches off.
Let us see what pressure this will give: A candle 12 inches off, acting on 2 square inches of surface, was found equal to 0.000444 grain; the sun, equaling 1,000 candles, therefore gives a pressure of 0.444 grain; that is equal to about 32 grains per square foot, to 2 cwts. per acre, 57 tons per square mile, or nearly 3,000,000,000 tons on the exposed surface of the globe—sufficient to knock the earth out of its orbit if it came upon it suddenly.
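The chain of conversions behind these figures can be retraced (1 grain = 1/7000 lb, 1 cwt = 112 lb, 1 long ton = 2240 lb). The Earth's radius below is a modern value, and the "exposed surface" is taken as the sunlit cross-section; both are assumptions made for illustration:

```python
import math

GRAINS_PER_LB = 7000
LB_PER_CWT = 112
LB_PER_TON = 2240          # long ton
SQFT_PER_ACRE = 43_560
ACRES_PER_SQMILE = 640

grain_per_2sqin = 0.444                      # 1,000 candles on 2 sq in
grains_per_sqft = grain_per_2sqin / 2 * 144  # ~32 grains per square foot

lb_per_acre = grains_per_sqft * SQFT_PER_ACRE / GRAINS_PER_LB
cwt_per_acre = lb_per_acre / LB_PER_CWT      # ~1.8, i.e. "about 2 cwts."
tons_per_sqmile = lb_per_acre * ACRES_PER_SQMILE / LB_PER_TON  # ~57 tons

earth_radius_miles = 3_959                   # modern figure, an assumption
sunlit_cross_section = math.pi * earth_radius_miles ** 2   # sq miles
tons_on_earth = tons_per_sqmile * sunlit_cross_section     # ~2.8e9 tons
```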
It may be said that a force like this must alter our ordinary ideas of gravitation; but it must be remembered that we only know the force of gravity as between bodies such as they actually exist, and we do not know what this force would be if the temperatures of the gravitating masses were to undergo a change. If the sun is gradually cooling, possibly its attractive force is increasing, but the rate will be so slow that it will probably not be detected by our present means of research.
While showing this experiment I wish to have it distinctly understood that I do not attach the least importance to the actual numerical results. I simply wish to show you the marvelous sensitiveness of the apparatus with which I am accustomed to work. I may, indeed, say that I know these rough estimates to be incorrect. It must be remembered that our earth is not a lampblacked body inclosed in a glass case, nor is its shape such as to give the maximum of surface with the minimum of weight. The solar forces which perpetually pour on it are not simply absorbed and degraded into radiant heat, but are transformed into the various forms of motion we see around us, and into the countless forms of vegetable, animal, and human activity. The earth, it is true, is poised in vacuous space, but it is surrounded by a cushion of air; and, knowing how strongly a little air stops the movement of repulsion, it is easy to conceive that the sun's radiation through this atmospheric layer may not produce any important amount of repulsion. It is true the upper surface of our atmosphere must present a very cold front, and this might suffer repulsion by the sun; but I have said enough to show how utterly in the dark we are as to the cosmical bearings of this action of radiation, and further speculation would be but waste of time.
It may be of interest to compare these experimental results with a calculation made in 1873, before any knowledge of these facts had been made public.
Prof. Clerk Maxwell, in his "Electricity and Magnetism," vol. ii., p. 391, writes as follows: "The mean energy in one cubic foot of sunlight is about 0.0000000882 of a foot-pound, and the mean pressure on a square foot is 0.0000000882 of a pound-weight. A flat body exposed to sunlight would experience this pressure on its illuminated side only, and would therefore be repelled from the side on which the light falls."
Calculated out, this gives the pressure of sunlight equal to about two and a half pounds per square mile. Between the two and a half pounds deduced from calculation and the fifty-seven tons obtained from experiment the difference is great; but not greater than is often the case between theory and experiment.
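Maxwell's figure converts to pounds per square mile as follows; setting it beside the 57-ton experimental value shows the size of the discrepancy:

```python
maxwell_lb_per_sqft = 8.82e-8          # Maxwell's mean pressure of sunlight
sqft_per_sqmile = 5280 ** 2            # 27,878,400 square feet

maxwell_lb_per_sqmile = maxwell_lb_per_sqft * sqft_per_sqmile  # ~2.46 lb

crookes_tons_per_sqmile = 57           # the experimental figure above
crookes_lb_per_sqmile = crookes_tons_per_sqmile * 2240  # long tons to lb

discrepancy = crookes_lb_per_sqmile / maxwell_lb_per_sqmile  # ~50,000-fold
```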
In conclusion, I beg to call especial attention to one not unimportant lesson which may be gathered from this discovery. It will be at once seen that the whole springs from the investigation of an anomaly. Such a result is by no means singular. Anomalies may be regarded as the finger-posts along the high-road of research, pointing to the by-ways which lead to further discoveries. As scientific men are well aware, our way of accounting for any given phenomenon is not always perfect. Some point is perhaps taken for granted, some peculiar circumstance is overlooked. Or else our explanation agrees with the facts not perfectly, but merely in an approximate manner, leaving a something still to be accounted for. Now, these residual phenomena, these very anomalies, may become the guides to new and important revelations.
In the course of my research anomalies have sprung up in every direction. I have felt like a traveler, navigating some mighty river in an unexplored continent. I have seen to the right and the left other channels opening out, all claiming investigation, and promising rich rewards of discovery for the explorer who shall trace them to their source. Time has not allowed me to undertake the whole of a task so vast and so manifold. I have felt compelled to follow out, as far as lay in my power, my original idea, passing over reluctantly the collateral questions springing up on either hand. To these I must now invite the attention of my fellow-workers in science. There is ample room for many inquirers.
Nor must we forget that the more rigidly we scrutinize our received theories, our routine explanations and interpretations of Nature, and the more frankly we admit their shortcomings, the greater will be our ultimate reward. In the practical world fortunes have been realized from the careful examination of what has been ignorantly thrown aside as refuse; no less, in the sphere of science, are reputations to be made by the patient investigation of anomalies.—Advance Sheets of Quarterly Journal of Science.
1. A lecture delivered at the Royal Institution. | 2017-02-28 01:31:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.554632306098938, "perplexity": 904.6121645963779}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00465-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://mathoverflow.net/questions/356291/equivalence-of-antiderivative-in-l1-sense-and-in-the-usual-sense/356297 | # Equivalence of antiderivative in L1 sense and in the usual sense
We say that $$f$$ is differentiable w.r.t. $$L_1$$ if there exists a $$g$$ such that: $$\lim_{h\to 0}\left\Vert\frac{f(x+h)-f(x)}{h} - g(x)\right\Vert_1 = 0$$ where $$\Vert \cdot \Vert_1$$ is the $$L_1$$ norm. Since $$f$$ is in $$L_1$$, the corresponding $$g$$ must be in $$L_1$$ too, and so by Lebesgue, it has an antiderivative $$G$$ which is differentiable a.e., with $$G'(x)=g(x)$$.
My question is: does $$f=G$$ a.e?
Here is my line of thought: if $$G$$ is in $$L_1$$, it can be shown that $$\hat{g}{(t)} = 2\pi it\hat{G}{(t)} = 2\pi it\hat{f}{(t)},$$ which then implies that $$f=G$$ a.e. So, in order to show that $$f=G$$ a.e., it is enough to show that $$G$$ is in $$L_1$$, and that's where I got stuck.
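As a numerical sanity check of the Fourier-transform identity used above (my illustration, not part of the question): for a smooth, rapidly decaying $$f$$, the identity $$\hat{f'}(t) = 2\pi i t \hat{f}(t)$$ can be verified by direct quadrature on a Gaussian.

```python
import cmath
import math

def fourier(fn, k, lo=-8.0, hi=8.0, n=16000):
    """Midpoint-rule approximation of the Fourier transform
    integral of fn(x) * exp(-2*pi*i*k*x) dx over [lo, hi]."""
    dx = (hi - lo) / n
    total = 0j
    for j in range(n):
        x = lo + (j + 0.5) * dx
        total += fn(x) * cmath.exp(-2j * math.pi * k * x)
    return total * dx

def f(x):            # a smooth, rapidly decaying L^1 function
    return math.exp(-math.pi * x * x)

def f_prime(x):      # its classical derivative
    return -2.0 * math.pi * x * f(x)

k = 0.7
lhs = fourier(f_prime, k)                # Fourier transform of f'
rhs = 2j * math.pi * k * fourier(f, k)   # 2*pi*i*k times the transform of f
print(abs(lhs - rhs))                    # agreement to numerical precision
```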
• I guess everything's happening on $\mathbb R$? – LSpice Apr 1 at 17:03
• This seems loosely related to Proposition 9.3 in "Haïm Brezis: Functional Analysis, Sobolev Spaces and Partial Differential Equations (2011)". – Jochen Glueck Apr 1 at 20:00
Most antiderivatives of $$g$$ are not in $$L^1(\mathbb{R})$$; in your case only one antiderivative $$G_0$$ will be in $$L^1(\mathbb{R})$$, the one actually equal to $$f$$. All the other antiderivatives $$G$$ are equal to $$G_0 + c$$ with $$c \neq 0$$, which is not in $$L^1(\mathbb{R})$$.
To prove that there exists one antiderivative $$G_0$$ in $$L^1(\mathbb{R})$$, start by noticing that your $$L^1(\mathbb{R})$$ differentiability implies differentiability in the distributional sense, so for $$\phi \in \mathcal{C}_{comp}^\infty(\mathbb{R})$$ we have $$\langle f',\phi \rangle = \langle g,\phi \rangle. \qquad (1)$$ Fix $$G$$ an antiderivative of $$g$$; you have $$G' = g$$ in the distributional sense. Equation $$(1)$$ becomes $$\langle f',\phi \rangle = \langle G',\phi \rangle \implies \langle (f-G)',\phi \rangle = 0,$$ which implies that $$f-G = c$$, a constant. Choosing $$G_0 = G + c$$, we have $$G_0 = f \in L^1(\mathbb{R})$$.
You can further show that there exists a constant $$c_0$$ such that $$G_0(x) = c_0 + \int_0^x g(y)\, dy.$$ | 2020-08-04 23:29:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 44, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9486714601516724, "perplexity": 133.40632779761822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735885.72/warc/CC-MAIN-20200804220455-20200805010455-00463.warc.gz"} |
http://www.maths.usyd.edu.au/u/AlgebraSeminar/14abstracts/street14.html | # Ross Street (Macquarie University)
## Friday 15 August, 12:05-12:55pm, Place: 373
### The Dold-Kan Theorem and categories of groupoid representations
This joint work with Stephen Lack began by our examining an equivalence of categories that occurs in the paper [Church-Ellenberg-Farb, ``FI-modules: a new approach to stability for $$S_n$$-representations'', arXiv:1204.4533v2]. Here $$\mathrm{FI}$$ is the category of finite sets and injective functions, while an $$\mathrm{FI}$$-module is a functor $$\mathrm{FI}\to \mathrm{Mod}_R$$ into a category of modules. Let $$\mathfrak{S}$$ denote the symmetric groupoid: that is, the category of finite sets and bijective functions. The paper [ibid.] shows stability aspects of the representation theory of the symmetric groups can be studied profitably via $$\mathrm{FI}$$-modules. Important examples of $$\mathrm{FI}$$-modules in this story are in fact $$\mathrm{FI\#}$$-modules, where $$\mathrm{FI\#}$$ is the category of finite sets and injective partial functions. We believe a vital part of the applicability of $$\mathrm{FI}$$-modules to these representations is the equivalence of categories $$[\mathrm{FI\#},\mathrm{{Mod}_R}]\simeq [\mathfrak{S},\mathrm{{Mod}_R}]$$, where $$[\mathcal{A},\mathcal{B}]$$ denotes the category of functors $$\mathcal{A}\to\mathcal{B}$$ and natural transformations between them. The generalisation I will present not only gives a similar equivalence for other classical groupoids but also includes the Dold-Kan equivalence between chain complexes of $$R$$-modules and simplicial $$R$$-modules.
| 2017-12-17 15:54:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9111968874931335, "perplexity": 404.556004198094}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948596115.72/warc/CC-MAIN-20171217152217-20171217174217-00655.warc.gz"} |
https://mathstodon.xyz/users/bremner/statuses/101882267703479832 | I just posted a scary post to the #Mailpile blog: mailpile.is/blog/2019-04-06_Bu
... telling the world I'm burned out and admitting that the project has suffered for it, is maybe not the smartest P.R. move.
But I think people deserve the truth, and it's not all bad news. And besides, the shame and stigma around mental health stuff needs to die, and admitting weakness is part of that.
Tell me I'm right? 😳
So, like, that blog post was great and all.
And the other work I did today, it was good too.
But my main accomplishment, the thing I'm most proud of today: I helped my daughter get a pea out of her nose, without hurting her or stressing her out. 👨👧 🥇
@HerraBRE Did you hurt the pea?
@bremner Not with my daughter present. But it's not going to try that again, I can promise you.
A Mastodon instance for maths people. The kind of people who make $\pi z^2 \times a$ jokes.
Use $ and $ for inline LaTeX, and $ and $ for display mode. | 2019-07-20 10:23:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5309379696846008, "perplexity": 3943.656861784895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526506.44/warc/CC-MAIN-20190720091347-20190720113347-00333.warc.gz"} |
https://labs.globus.org/blog/2020-07-11-Funcx.html |
# High Performance Function(s) as a Service
Written by Maksim Levental on July 11, 2020
# Motivation
Are you a scientist that suffers from bursitis1? Do you loathe dealing with runtimes and infrastructure? Is your favorite calculus the $\lambda$-calculus? Do you have commitment issues with respect to your cloud provider? Well do I have an offering for you; presenting a High Performance Function(s) as a Service (HPFaaS) Python framework called funcX.
In all seriousness though; HPFaaS is a software development paradigm where the fundamental unit of computation is the function and everything else is abstracted away. Availing oneself of these abstractions enables one to benefit from data-compute locality2 and distribution to heterogeneous resources (such as GPUs, FPGAs, and ASICs). Another name for this kind of software is “serverless computing”; in the context of the kinds of workloads that scientists typically have, we call this “serverless supercomputing”.
Some example projects that use funcX are:
• Synchrotron Serial Crystallography is a method for imaging small crystal samples 1–2 orders of magnitude faster than other methods; using funcX SSX researchers were able to discover a new structure related to COVID
• DLHub uses funcX to support the publication and serving of ML models for on-demand inference for scientific use cases
• Large distributed file systems produce new metadata at high rates; Xtract uses funcX to extract metadata colocated with the data rather than by aggregating centrally
• Real-time High-energy Physics analysis using Coffea and funcX can accelerate studies of decays such as H$\rightarrow$bb
# But what is funcX?
funcX works by deploying the funcX endpoint agent on an arbitrary computer, registering a funcX function with a centralized registry, and then calling the function using either the Python SDK or a REST API. So that we can get to the fun stuff quickly we defer discussion of deploying a funcX endpoint until the next section and make use of the tutorial endpoint.
To declare a funcX function you just define a conventional Python function like so
def funcx_sum(items):
    return sum(items)
et voila! To register the function with the centralized funcX function registry service we simply call register_function:
from funcx.sdk.client import FuncXClient

fxc = FuncXClient()
func_uuid = fxc.register_function(
    funcx_sum,
    description="A summation function"
)
The func_uuid is then used to call the function on an endpoint; using the tutorial endpoint_uuid:
endpoint_uuid = '4b116d3c-1703-4f8f-9f6f-39921e5864df'
items = [1, 2, 3, 4, 5]
res = fxc.run(
    items,
    endpoint_id=endpoint_uuid,
    function_id=func_uuid
)
fxc.get_result(res)
>>> 15
And that’s all there is to it! The only caveat (owing to how funcX serializes functions) is that all libraries/packages used in the function need to be imported within the body of the function, e.g.
def funcx_sum_2(items):
    from numpy import sum
    return sum(items)
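A toy sketch of why this caveat exists: only the function itself travels to the worker, so module-level imports made on the client never arrive. (funcX's real serializer is more sophisticated; this is only an illustration of the idea, and `remote_mean` is a hypothetical example function.)

```python
# Pretend this source string was registered and shipped to a worker.
source = '''
def remote_mean(items):
    import statistics  # imported inside the body, so it works on the worker
    return statistics.mean(items)
'''

namespace = {}            # the "worker" starts from an empty namespace
exec(source, namespace)   # module-level imports from the client never arrive
result = namespace["remote_mean"]([1, 2, 3, 4])
print(result)             # 2.5
```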
## Deploying a funcX endpoint
An endpoint is a persistent service launched by the user on their compute system that serves as a manager for routing requests and executing functions on that compute system. Deploying a funcX endpoint is eminently straightforward. The endpoint can be configured to connect to the funcX webservice at funcx.org. Once the endpoint is registered, you can invoke functions to be executed on it.
You can pip install funcx to get the funcX package onto your system. Having done this, initiating funcX will ask you to authenticate with Globus Auth:
$ funcx-endpoint init
Please paste the following URL in a browser:
https://auth.globus.org/v2/oauth2/authorize?client_id=....
funcX requires authentication in order to associate endpoints with users and enforce authentication and access control on the endpoint.
Creating, starting, and stopping the endpoint is as simple as
funcx-endpoint configure <ENDPOINT_NAME>
and
funcx-endpoint start <ENDPOINT_NAME>
and
funcx-endpoint stop <ENDPOINT_NAME>
How to set configuration parameters and other details are available in the documentation but there’s not much more to it than that. You can deploy endpoints anywhere that you can run pip install funcx.
## Architecture and Implementation
funcX consists of endpoints and a registry that publishes endpoints and registered functions:
Each endpoint runs a daemon that spawns managers that themselves orchestrate a pool of workers that run funcX functions within containers3:
The endpoint also implements fault tolerance facilities using a watch dog process and heartbeats from the managers.
Communication between the funcX service, the endpoints, and the managers is all over ZeroMQ. For all of the misers4 in the audience, funcX implements all of the standard optimization strategies to make execution more efficient with respect to latency and compute (memoization, container warming, request batching). For the paranoiacs4 in the audience, funcX authenticates and authorizes registering and calling functions using Globus Auth and sandboxes functions using containerization and file system namespacing therein. More details (along with performance metrics and comparisons with commercial competitors) are available in the funcX paper.
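Memoization, one of the optimizations mentioned above, can be sketched generically in a few lines. This is an illustration of the idea only, not funcX's actual implementation; `memoized` and `slow_sum` are hypothetical names.

```python
import functools
import json

def memoized(fn):
    """Cache results keyed by the call arguments (illustration only)."""
    cache = {}

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        key = json.dumps([args, kwargs], sort_keys=True, default=str)
        if key not in cache:
            cache[key] = fn(*args, **kwargs)
        return cache[key]

    return wrapper

calls = []  # track how many times the wrapped function really runs

@memoized
def slow_sum(items):
    calls.append(list(items))
    return sum(items)

print(slow_sum([1, 2, 3]))  # computed: 6
print(slow_sum([1, 2, 3]))  # served from the cache: 6
print(len(calls))           # 1
```

Repeated invocations with the same inputs skip re-execution entirely, which is the same latency win the funcX paper describes.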
# Conclusion
funcX is for scientists that have compute needs that fluctuate dramatically in time and resource requirements. The project is open source (available on GitHub) and provides a binder instance that you can immediately experiment with. If you have any questions or you’re interested in contributing feel free to reach out to the project or myself directly! | 2020-08-11 18:16:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22158335149288177, "perplexity": 4965.840990705232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738819.78/warc/CC-MAIN-20200811180239-20200811210239-00459.warc.gz"} |
https://math-physics-problems.wikia.org/wiki/Simple_Harmonic_Oscillator | 303 Pages
Problem
Consider the position function $x(t) = A \sin(\omega t)$.
Part 1: Determine the value of $\omega$ if this function solves the differential equation:
$\ddot{x} + \frac{k}{m} x = 0$.
Part 2: Try to explain what each term of the above differential equation means.
Solution
Part 1
Take two time derivatives:
$\dot{x}(t) = A \omega \cos(\omega t), \qquad \ddot{x}(t) = -A \omega^2 \sin(\omega t) = -\omega^2 x(t)$.
Consequently,
$-\omega^2 x + \frac{k}{m} x = 0$.
Divide away $x$:
$\omega^2 = \frac{k}{m}$.
Therefore $\omega = \sqrt{\frac{k}{m}}$.
This quantity is the angular frequency of the wave. Mathematically speaking, it is the eigenvalue of the differential equation.
Part 2
Multiply the entire equation by $m$:
$m\ddot{x} + kx = 0$.
Since acceleration is the second time derivative of position,
$ma = -kx$.
This is Newton’s second law with the spring force being the net force!
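As a quick numerical check (with hypothetical sample values for $k$, $m$, and the amplitude $A$), $x(t) = A\sin(\omega t)$ with $\omega = \sqrt{k/m}$ does satisfy $m\ddot{x} = -kx$:

```python
import math

# Hypothetical sample values for the spring constant, mass, and amplitude.
k, m, A = 4.0, 1.0, 2.0
omega = math.sqrt(k / m)  # the eigenvalue found in Part 1

def x(t):
    return A * math.sin(omega * t)

def xddot(t, h=1e-4):
    # central finite-difference approximation of the second time derivative
    return (x(t + h) - 2.0 * x(t) + x(t - h)) / h**2

t = 0.9
print(m * xddot(t))  # left side of Newton's second law
print(-k * x(t))     # right side: the spring force
```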
Community content is available under CC-BY-SA unless otherwise noted. | 2021-07-25 03:35:44 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9021642208099365, "perplexity": 881.0589163574991}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151563.91/warc/CC-MAIN-20210725014052-20210725044052-00301.warc.gz"} |
https://space.stackexchange.com/questions/40014/why-is-spacex-not-also-working-on-a-smaller-version-of-starship/40018 | # Why is SpaceX not also working on a smaller version of Starship?
SpaceX is planning to retire Falcon 9, which would leave it with only Starship. While a Starship launch is expected to be cheaper than a Falcon 9 one, a downscaled Starship launch would be cheaper still.
Moreover, a downscaled Starship would be easier to develop and iterate upon, paving the way for the full scale Starship. It could be as similar as possible to the full scale Starship (but of course allometrically scaled). So it might even lower the total development cost and risk.
It could conceivably use a seventh as many engines, ending up with five on the first stage and one on the second. That smaller Starship could be used for almost all the satellite launches, as it would still be a respectable size.
• They just trimmed the top off of it? twitter.com/NASASpaceflight/status/1197265917589303296 – ta.speot.is Nov 21 '19 at 2:49
• "a downscaled Starship launch would be cheaper still" why would you think so? Things don't scale effortlessly, up or down. Even the mere fact of having two launchers instead of one introduces extra expenses. And that's completely ignoring R&D, which is a significant part of the costs of dealing with rockets in general. – Luaan Nov 21 '19 at 12:11
• The original design a few years ago was based around a 12m diameter. Musk has also said an 18m design might come at some point. Maybe the current 9m design is the smaller version... – James Thorpe Nov 22 '19 at 9:18
• The Falcon 9 uses two types of Merlin engines: sea-level optimized and vacuum-optimized variants. The Merlin is a LOX / RP1 gas-generator cycle engine with thrust around 900 kN. The Starship and Superheavy booster are designed to use two variants of the new Raptor engine. The Raptor is a LOX / liquid methane full-flow staged combustion engine and has a thrust of around 2400 kN. – Dragongeek Nov 24 '19 at 13:48
There are several compelling engineering and design reasons why a bigger spaceship makes sense and several reasons why making a mini-starship does not make sense for SpaceX specifically (and their vision).
First and foremost, Elon Musk has made it clear that his goal for the company and the future isn't to provide cheap satellite launch capabilities, it's to put people on Mars. Building an inferior version of the Starship isn't on the "critical path" to putting people on Mars for Elon. SpaceX is very much Elon Musk's company and it follows his vision.
That said, here are a couple other reasons why SpaceX specifically might not want to make a Mini-Starship (although a Mini-Starship might make sense for some manufacturers):
• Bigger rockets can be more efficient, fuel-consumption wise, as dry mass can be a smaller portion of the overall rocket mass. For example, if your flight computer and avionics system masses 100 kg, it would have a large effect on a rocket which can only lift 1000kg to LEO. If your rocket is able to put up 100,000kg at a time, the 100kg flight computer suddenly becomes much less of the overall dry mass and thus your rocket is more efficient.
• Building big vs building small. One of the things that is "revolutionary" in Starship's current design/construction progress is that it's being welded together in a field, outdoors. SpaceX believes that, when your rocket is big, you can get away with looser tolerances, which in turn equates to money saved. For example, the Sea Dragon proposal was designed to be assembled in a shipyard and had an enormous lift capability for a very low cost. Starship is similar. If a Mini-Starship were constructed, it would probably require tighter tolerances and smaller, more exact parts. Really, this boils down to my first point again. If you're building a small rocket, every weld, bolt, and wire has a more significant impact on a lighter, smaller rocket's efficiency.
• SpaceX believes that a design similar to the two Starships currently being prototyped will work. If they instead scrapped what they have developed so far and built a Mini-Starship, they'd have to essentially start from zero again. The design for a rocket is so complex, that you can't just resize it and have a functional design afterwards. SpaceX learned this lesson the hard way when developing Falcon Heavy. Initially, Elon had thought that it would just be "strapping three boosters together" but in the end, SpaceX had to develop the center core almost entirely from scratch.
• SpaceX has a limited amount of employees and money. If they decided to work on a Starship and Mini-Starship concurrently, the pace on both projects would be cut in half if not more. Elon has stated that development speed is critically important to him and diluting his engineers with extra projects means that Starship takes longer to build.
• The smallsat launch market is heating up. Several companies, notably Rocket Labs, are focused on sending small satellites to orbit. SpaceX is betting that there will be no shortage of demand for mass sent to space and then it won't matter how big the rocket is that goes up, as they'll be able to fill every rocket with payloads. Even then, the $2 million launch cost is so low that even if Starship flies mostly empty, it would still be making a profit.

Addendum: In the comments several concerns have been raised:

• Building a Mini-Starship would be cheaper than a full sized Starship
  • On this, I don't disagree. If you were starting from scratch, building a 1/8 scale starship would probably be cheaper than a full sized one.
  • SpaceX has already sunk a lot of money and development time into ITS/BFR/Starship over the years. Simply shelving these efforts to work on an inferior rocket would be seen as a waste of money.
  • While it would be cheaper, I don't think it would be significantly cheaper. The big costs are the Raptor engines as they're highly complex and are a revolutionary design. Still, all Starship prototypes we've seen so far (Starhopper and the late MK1) haven't been equipped with the full complement of engines. Starhopper only had one and the MK1 was only briefly test-fit with three engines.
  • Starship and the Superheavy booster are two different craft. Starship is designed to be equipped with three sea-level Raptor engines and three vacuum-optimized engines for redundancy. On a Mini-Starship, only a single Raptor engine would fit, which wouldn't be efficient and wouldn't provide the redundancy that multiple engines allow. An entirely new Merlin-sized methalox engine would need to be developed, which just isn't going to happen when SpaceX has the working Raptor in production.
• A Mini-Starship could be iterated more rapidly
  • Maybe, but neither the Mini-Starship nor Starship are small desk-top or garage-shop construction projects.
    While working with smaller components would be easier, cranes and heavy equipment would still be required for both.
  • The labor that's connected to the actual size of the rocket doesn't take up much of the overall assembly time. Yes, welding together a bigger rocket takes longer than a small one, but installing avionics, sensors, engines, wires, and writing software wouldn't suddenly become quicker if the rocket were physically smaller.
• Mini-Starship could provide cheap launches while Starship is developed
  • Starship (or Mini-Starship) are not SSTO vehicles. They both require the additional development of the Superheavy booster. Also, there is no reason why SpaceX needs to stop operating the Falcon 9 series. They have plenty of rockets with plenty of reuse cycles left and, despite the non-reusable second stage, Falcon 9 can easily undercut the price of any other launch provider. Until the Vulcan or Ariane 6 rockets become operational, SpaceX won't even have to lower their prices (and profit margins) for Falcon 9 to be the cheapest provider.
• Beliefs that upscaling the Mini-Starship to a full sized one would be easy or lots could be shared during development
  • I'm not quite sure where this belief is coming from. In the engineering world, you can't just scale something up and expect it to work. For example, take the Cessna 172, which is the most built aircraft of all time. Why don't larger planes just look like the Cessna but bigger? Because simply scaling up a design, no matter how fantastic, just doesn't always make sense.
  • The parts of development that could be shared don't justify building an entirely new rocket. The engines are already built, and building fuel tanks and the actual structure isn't simple per se but it's rather straightforward compared to developing a FFSC engine.
    Aerodynamics and in-flight control would also be explored in a Mini-Starship, and maybe heat-shield tech, and would be transferable, but again, SpaceX clearly doesn't believe this would be cost-effective.
• SpaceX could reap profits in the smallsat market with Mini-Starship
  • Just because the smallsat market is heating up doesn't mean that bigger satellites are going anywhere. Especially with Starlink, SpaceX isn't hurting for cash or launch contracts. Dealing with smallsat launches where each satellite only pays like 50k isn't where SpaceX sees profits.

• "When your rocket is big, you can get away with looser tolerances" - this is yet to be proven. The sloppy-looking welds on the vehicle in Boca Chica scare me. – Organic Marble Nov 20 '19 at 18:38
• @OrganicMarble Fixed – Dragongeek Nov 20 '19 at 18:40
• @OrganicMarble Especially on the windward side, during the hypersonic flight regime. How will those bumps and welds affect the hypersonic flow? Terrifying to think about. And how do you model that in a wind tunnel? – geoffc Nov 20 '19 at 18:53
• @OrganicMarble Prescient comment. It blew up a few hours after you made that comment. – ceejayoz Nov 21 '19 at 2:18
• "Blew up" -- they were conducting an over pressure test and reached the point it couldn't take the over pressure. It looks dramatic, but they knew going in it was a possibility. – Rob Crawford Nov 21 '19 at 22:01

There was some talk of modifying Falcon 9's second stage to give it full reusability in November of 2018. The idea was that a reusable second stage would be used to test out Starship technologies. This effort was scrapped 10 days later in favor of accelerating development on the current stainless steel design of Starship. The Superheavy/Starship architecture is designed to facilitate exploration and exploitation of space beyond simple Low Earth Orbit, which necessitates a larger booster and spacecraft.
Since development costs tend to outpace materials, manufacturing, and fuel costs, it makes more sense to build a single large vehicle that can do many jobs than two vehicles for separate missions.

• This answer assumes that developing a smaller Starship in parallel would significantly increase the total development costs, which is unclear to me. As I wrote in my question, developing a smaller version could even decrease some of the total costs. Falcon 9 is a very different rocket, so it being scrapped doesn't seem like strong evidence. – Stephane Bersier Nov 23 '19 at 23:55

Similar to how a small plane can't really carry any people around the world, but a larger plane can, it turns out there are challenges to making really small rockets. These are magnified when you take into account full reusability. It turns out that Starship is about the smallest spacecraft that makes sense for a fully reusable spacecraft.

A few things to consider. The pressure of the tanks has to remain about the same. The strength of a tank is pretty much dependent on the thickness of the tank; thus you get a lot more stored fuel with a larger rocket for the weight. Rocket engines have a maximum thrust-to-weight ratio when they are larger. Starship is intended to support missions to orbit even when some engines fail; the size pretty much again comes to Starship sized. Heat shielding is a bit trickier, but I believe there is a similar minimum mass that is required to truly be effective. Essentially it has to absorb the energy of the spacecraft and dissipate it out. With a larger spacecraft, the density-per-area ratio is lower, allowing for more effective slowing down, and also less heat shielding required.

Lastly, there was a similar thought process with Falcon 1 vs Falcon 9. It turns out that many of the costs of a launch are fixed: the flight analysis, coupled loads analysis, etc. all have to be done regardless of whether you have a single small satellite or a huge one.
Concentrating on a larger load allows those costs to be minimized. Bottom line is, it is far more efficient for a spacecraft with Starship's goals to focus purely on a spacecraft roughly the size of Starship. Robert Zubrin even mentioned this at a recent Mars Society meeting.

• In particular, the surface area of fuel tanks/heat shields goes up as the square of the ship scale, but the volume goes up as the cube, which is what provides the favorable economies of larger ships. Of course, this doesn't continue indefinitely, as you become limited by materials strength. Thus, there is an optimal size. – Lawnmower Man Nov 21 '19 at 3:40
• Cube-square law does not apply to pressurized tanks. The structural mass of a tank is proportional to pressure and volume. See Pressure Vessel#Scaling – Rainer P. Nov 21 '19 at 8:49
• @RainerP. Right, but that doesn't apply very much to rockets. Unless you're using pressure-fed engines, the internal pressure in the tanks is relatively low, and the resulting loads are much less important than the axial compression due to thrust and dynamic pressure or aerodynamic bending loads. Also, with liquid propellants in the tanks, only hydrostatic pressure matters for the tanks, and that only scales with length. – TooTea Nov 21 '19 at 10:06
• Most of the arguments of this answer are flawed. – Stephane Bersier Nov 23 '19 at 23:37
• For paragraph one: in the atmosphere, smaller planes have a smaller range because of the square–cube law. This doesn't apply in space. While rockets launched from Earth do have to contend with the atmosphere initially, the energy lost to drag is already very minor for a Falcon 9. – Stephane Bersier Nov 23 '19 at 23:38

The simple answer, as already stated by others, is that small rockets do not align with Musk's goals of putting people on Mars. Additionally, Spacex already have a small (OK, medium sized) launcher in the form of Falcon 9. They claim that Starship + Falcon Superheavy will be able to undercut it.
One possibility I wouldn't disregard (if SpaceX chose to go down that route in future) is to use the upper stage of Starship without a booster to launch payloads to suborbital speeds. A payload would then need its own kicker stage to get into orbit. SpaceX have already revised their Earth-to-Earth passenger scheme to one without a booster, so using the upper stage of Starship without a booster is not an entirely new idea. The kicker stage could be a simple solid rocket, or a reusable stage - whatever is decided in the future. I would note that when Falcon 9 is retired, SpaceX will have several hundred surplus Merlin engines which could be given one last use in an expendable kicker stage. SpaceX has gone for the upper size end of the market, with very few competitors, all of them expendable: SLS (projected to be ready soon) and Long March 9 and Yenisei (projected to be ready in the late 2020s). They have avoided the crowded lower size end of the market. If one of the current players in that market becomes big enough to compete with SpaceX, they may regret it in 20 years. But there is no sign of that happening yet. I wouldn't discount SpaceX developing a smaller launch system later - but it would be done in an opportunistic way on the back of the Starship program. SpaceX does have one spinoff project not directly aligned to the goal of getting to Mars, and in typical Musk style, it is unique: the Starlink satellite constellation project. Spinoffs like this are obviously a necessary way to raise funds. • I feel like your last paragraph contradicts your first one... – Stephane Bersier Nov 23 '19 at 23:28 • @StephaneBersier How so? Musk's main goal with SpaceX is to build a large rocket to take people to Mars. For that he needs both money and a way to gain experience with his rocket. One way to make money is with the Starlink constellation, which is something nobody else is doing (or at least, not on anything like the same scale.)
Another way would be to build a small rocket, which is something a lot of others are competing to do; therefore SpaceX have avoided this. – Level River St Nov 25 '19 at 2:22 • SpaceX is actually already planning to compete in the smallsat market with their Smallsat Program. Given the expected specs, a mini-Starship would have no trouble competing in the smallsat market while still maintaining large profit margins, serving the same role as the Falcon 9 in their current Smallsat Program plans. – Stephane Bersier Nov 25 '19 at 18:45 • @StephaneBersier SpaceX's Smallsat program is a rideshare program, not a small rocket program. It's a great option if you don't care too much what orbit you're going into or when you launch, since all satellites on the launch go into virtually the same orbit on the same launch date. Small rocket operators like Rocketlab will give you a bespoke orbit and launch when you want. That's the advantage they have over rideshare, and in theory they can charge more for clients who need it. The trouble is there's a lot of other small rocket operators either up and running or coming soon. – Level River St Nov 25 '19 at 23:54 • Exactly. I didn't say it's a small rocket program. My arguments still hold: SpaceX could make money for its ultimate goal with a mini-Starship, just as it's doing with Starlink. If SpaceX can make money with its current Smallsat program using a Falcon 9, then a fortiori it could do so with a mini-Starship. – Stephane Bersier Nov 27 '19 at 0:14 Right now Starship and its Super Heavy booster are both open designs. The current plan is for 35 engines on the 1st stage and 6 on the 2nd stage, a 150 t ascent payload to LEO, an 85 t dry mass, and a 50 t return-to-Earth payload. All of this could change during rocket development, because of technological challenges, but also because of how much money SpaceX will have available during the development process.
They have already changed the Starship design many times (number of fins, TPS type, design of the legs) and they could keep changing it constantly after the first test flights. In the end they could finish with a much smaller, medium version of Starship with fewer engines, less payload to LEO, and a smaller gross mass: for example a Starship/Super Heavy carrying 20-30 t to LEO with a total of just 15 Raptors, or perhaps the same 20-30 t payload to LEO but still with 30-40 Raptors in total. It depends on what would be optimal for the lowest possible cost per kilogram to LEO and GTO, which will be the main goal. The figure of 2 million per flight is just an aspirational goal and shouldn't be taken as fact. For example, ESA/Arianespace want the expendable Ariane 6 (payload 20 t to LEO, 10 t to GTO) to achieve the same cost per kilogram as F9R, which is about 5000 per kilogram to LEO (more than twice that to GTO), and with the Ariane Next program, which could use the potentially reusable LOX/methane engine Prometheus, to improve Ariane 6's cost per kilogram further by a factor of two. But that of course is not yet a given.
Likewise it is not a given that Starship/Super Heavy's cost per kilogram to LEO or GTO will be better than F9R's, since second-stage reuse is much harder than first-stage reuse (both technically and economically), and no rocket of Starship/Super Heavy's massive size (it will have a sea-level thrust 3-5 times that of the Saturn V) has ever been built, let alone reused.
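The cost-per-kilogram comparisons above are easy to sanity-check with a couple of lines of arithmetic. These are the answer's quoted and aspirational figures, not confirmed prices:

```python
# Rough cost-per-kilogram arithmetic using the figures quoted above.
ariane6_payload_kg = 20_000           # quoted Ariane 6 LEO payload
ariane6_cost_per_kg = 5_000           # quoted F9R-class target, per kg to LEO
implied_launch_cost = ariane6_payload_kg * ariane6_cost_per_kg
print(f"Implied Ariane 6 launch cost: {implied_launch_cost / 1e6:.0f} million")

starship_payload_kg = 150_000         # planned Starship LEO payload
starship_cost_per_flight = 2_000_000  # aspirational goal only, not a fact
print(f"Aspirational Starship cost/kg: {starship_cost_per_flight / starship_payload_kg:.2f}")
```

The gap between the two numbers (thousands per kilogram versus tens) is why the aspirational figure should be treated with the caution the answer recommends.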
• The Starship design is indeed not final. However, it is to be at least a super heavy-lift launch vehicle, which excludes the possibility of it being as small as the mini-Starship described in the question. – Stephane Bersier Nov 30 '19 at 0:44 | 2020-08-15 20:20:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45151904225349426, "perplexity": 2006.4199121514737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439741154.98/warc/CC-MAIN-20200815184756-20200815214756-00408.warc.gz"} |
https://brilliant.org/practice/area-rectangles/ |
# Length and Area
If the area of a square is 144, what is the perimeter of the square? If a square and a circle have the same perimeter, which of them will have a greater area?
# Area - Rectangles
What is the area (in m$$^2$$) of a rectangle with length 5 m and width 15 m?
Benny drew a square with side length 12. What is the area of the square?
Far Eastern school has a rectangular field of area 3500 m$$^2$$. If the width of the field is $$50$$ m, what is the length (in meters) of the field?
A rectangle has length 17 meters and area 136 meters$$^2$$. What is the perimeter (in meters) of the rectangle?
Calvin's monitor screen measures 8 inches by 13 inches. What is the area of his screen (in inches$$^2$$)?
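Each of these exercises uses only the two rectangle relations, area = length × width and perimeter = 2 × (length + width). A quick script can verify the answers (the computed answers are mine, derived from those formulas, not the site's answer key):

```python
def area(length, width):
    """Area of a rectangle."""
    return length * width

def perimeter(length, width):
    """Perimeter of a rectangle."""
    return 2 * (length + width)

assert area(5, 15) == 75              # 5 m x 15 m rectangle
assert area(12, 12) == 144            # Benny's square
assert 3500 / 50 == 70                # field length = area / width
assert perimeter(17, 136 / 17) == 50  # width = area / length = 8 m
assert area(8, 13) == 104             # Calvin's monitor screen
print("all answers check out")
```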
× | 2017-06-24 10:19:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.648249089717865, "perplexity": 535.977853623376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320257.16/warc/CC-MAIN-20170624101204-20170624121204-00525.warc.gz"} |
https://developer.aliyun.com/article/356641 | # PostgreSQL参数学习:max_wal_senders
[Author: Gao Jian @ Cnblogs, luckyjackgao@gmail.com]
http://space.itpub.net/133735/viewspace-742081
http://www.postgresql.org/docs/9.3/static/app-pgbasebackup.html
The backup is made over a regular PostgreSQL connection, and uses the replication protocol. The connection must be made with a superuser or a user having REPLICATION permissions (see Section 20.2), and pg_hba.conf must explicitly permit the replication connection. The server must also be configured with max_wal_senders set high enough to leave at least one session available for the backup.
http://www.postgresql.org/docs/9.3/static/runtime-config-replication.html#GUC-MAX-WAL-SENDERS
max_wal_senders (integer)
Specifies the maximum number of concurrent connections from standby servers or streaming base backup clients (i.e., the maximum number of simultaneously running WAL sender processes). The default is zero, meaning replication is disabled. WAL sender processes count towards the total number of connections, so the parameter cannot be set higher than max_connections. This parameter can only be set at server start. wal_level must be set to archive or hot_standby to allow connections from standby servers.
pg_basebackup also counts against the total number of WAL senders:
http://www.postgresql.org/docs/9.3/interactive/app-pgbasebackup.html
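Putting the quoted documentation together, a minimal 9.3-era setup that leaves a sender session free for pg_basebackup might look like this (the values and the user/network names are illustrative, not from the original article):

```
# postgresql.conf (changing these requires a server restart)
wal_level = hot_standby        # or 'archive'; required before standbys can connect
max_wal_senders = 3            # e.g. 2 standbys + 1 spare session for pg_basebackup
max_connections = 100          # WAL senders count toward this total

# pg_hba.conf: explicitly permit the replication connection
# TYPE  DATABASE      USER      ADDRESS          METHOD
host    replication   repluser  192.168.0.0/24   md5
```

Since WAL sender processes count toward max_connections, size the two settings together; note that pg_basebackup's WAL-streaming mode may occupy an additional sender connection for the duration of the backup.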
+ 订阅 | 2020-07-15 13:08:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5491561889648438, "perplexity": 4277.323281957842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657167808.91/warc/CC-MAIN-20200715101742-20200715131742-00260.warc.gz"} |
http://stats.stackexchange.com/questions/26623/odds-ratio-and-confidence-interval-in-meta-analysis | # Odds ratio and confidence interval in meta-analysis
I need to calculate the pooled odds ratio and associated 95% confidence interval for meta-analysis of 2 studies about the risk of bleeding. The only information I have is the odds ratios and 95% confidence intervals. They are 2.7 (1.8 – 4.0) in the first, and 1.3 (0.5 – 3.4) in the second study.
I computed the standard errors, weights, and pooled SE, OR and CI from the available ORs and CIs. According to my calculations, the standard errors are 0.204 in the first, and 0.489 in the second study, which gives the pooled OR and CI of 2.49 (1.72 – 3.60).
I’m not sure about this and I would appreciate if someone checks this up.
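For reference, the standard approach works on the log-odds scale: recover each SE from the reported CI as (ln UL − ln LL)/(2·1.96), weight by inverse variance, then exponentiate. A sketch of that calculation (it reproduces the question's SEs of 0.204 and 0.489, but gives a fixed-effect pooled OR of about 2.42 rather than 2.49):

```python
import math

def se_from_ci(lower, upper, z=1.96):
    """Standard error of the log odds ratio, recovered from a reported 95% CI."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

studies = [(2.7, 1.8, 4.0), (1.3, 0.5, 3.4)]   # (OR, CI lower, CI upper)

ses = [se_from_ci(lo, up) for _, lo, up in studies]
print([round(se, 3) for se in ses])            # [0.204, 0.489] -- as in the question

# Inverse-variance fixed-effect pooling on the log scale
weights = [1 / se ** 2 for se in ses]
pooled_log = sum(w * math.log(or_) for (or_, _, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5
pooled_or = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * pooled_se), math.exp(pooled_log + 1.96 * pooled_se))
print(round(pooled_or, 2), tuple(round(x, 2) for x in ci))   # 2.42 (1.68, 3.5)
```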
-
I did the following in Stata; the first is a fixed-effect model and the second is random-effects. I got different answers than you did.
Study | ES [95% Conf. Interval] % Weight
---------------------+---------------------------------------------------
1 | 2.700 1.800 4.000 63.47
2 | 1.300 0.500 3.400 36.53
---------------------+---------------------------------------------------
I-V pooled ES | 2.189 1.312 3.065 100.00
---------------------+---------------------------------------------------
Heterogeneity calculated by formula
Q = SIGMA_i{ (1/variance_i)*(effect_i - effect_pooled)^2 }
where variance_i = ((upper limit - lower limit)/(2*z))^2
Heterogeneity chi-squared = 2.27 (d.f. = 1) p = 0.132
I-squared (variation in ES attributable to heterogeneity) = 56.0%
Test of ES=0 : z= 4.89 p = 0.000
. metan or ll ul, effect(Odds Ratio) null(1) lcols(trialname) texts(200) random
Study | ES [95% Conf. Interval] % Weight
---------------------+---------------------------------------------------
1 | 2.700 1.800 4.000 55.93
2 | 1.300 0.500 3.400 44.07
---------------------+---------------------------------------------------
D+L pooled ES | 2.083 0.721 3.445 100.00
---------------------+---------------------------------------------------
Heterogeneity calculated by formula
Q = SIGMA_i{ (1/variance_i)*(effect_i - effect_pooled)^2 }
where variance_i = ((upper limit - lower limit)/(2*z))^2
Heterogeneity chi-squared = 2.27 (d.f. = 1) p = 0.132
I-squared (variation in ES attributable to heterogeneity) = 56.0%
Estimate of between-study variance Tau-squared = 0.5488
Test of ES=0 : z= 3.00 p = 0.003
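The Stata numbers above can be reproduced by hand: `metan or ll ul` works on the raw OR scale, deriving each study's variance from the CI width exactly as in the printed formula, and the random-effects model adds the DerSimonian-Laird tau-squared to each variance. A sketch:

```python
studies = [(2.7, 1.8, 4.0), (1.3, 0.5, 3.4)]   # (ES, CI lower, CI upper), raw OR scale
z = 1.96
var = [((up - lo) / (2 * z)) ** 2 for _, lo, up in studies]

# Fixed effect (inverse-variance, the "I-V pooled ES")
w = [1 / v for v in var]
fixed = sum(wi * es for (es, _, _), wi in zip(studies, w)) / sum(w)
se_fixed = (1 / sum(w)) ** 0.5
print(round(fixed, 3), round(fixed - z * se_fixed, 3), round(fixed + z * se_fixed, 3))
# -> 2.189 1.312 3.065, matching the first table

# Heterogeneity Q and DerSimonian-Laird between-study variance tau^2
q = sum(wi * (es - fixed) ** 2 for (es, _, _), wi in zip(studies, w))
tau2 = max(0.0, (q - (len(studies) - 1)) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
print(round(q, 2), round(tau2, 4))   # -> 2.27 0.5489 (Stata prints 0.5488)

# Random effects (the "D+L pooled ES"): add tau^2 to each study's variance
w_re = [1 / (v + tau2) for v in var]
random_es = sum(wi * es for (es, _, _), wi in zip(studies, w_re)) / sum(w_re)
se_re = (1 / sum(w_re)) ** 0.5
print(round(random_es, 3), round(random_es - z * se_re, 3), round(random_es + z * se_re, 3))
# -> 2.083 0.721 3.445, matching the second table
```

Note this pools on the raw OR scale because that is what the Stata output above did; pooling on the log-OR scale (the more common convention) gives somewhat different numbers.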
-
Thank you very much. However, I’m not sure what to do now. As I calculated the numbers by myself, probably one of your results is completely accurate. Still, I’m not sure whether fixed effect or random effect is more appropriate in this case. Thanks again. – Jozo Apr 18 '12 at 18:43
I figured out that random effect should be used. Thanks again. – Jozo Apr 21 '12 at 14:51
So, did you ever get the same numbers? – pmgjones Apr 22 '12 at 17:52
Unfortunately, I have not. – Jozo Apr 30 '12 at 17:53 | 2013-12-06 19:47:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7033182978630066, "perplexity": 3310.2889743231435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163052462/warc/CC-MAIN-20131204131732-00099-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/sum-of-combinations-from-k-to-n.533716/ | # Sum of combinations from k to n
I have been trying to figure out a formula for the sum of combinations. For example:
$\sum_{k=0}^{n} \binom{n}{k} = 2^{n}$
But what if you want to sum from any arbitrary k, like 4? I've tried looking at Pascal's triangle for nice values of n and k, but haven't been able to see a pattern. I would really appreciate any help with this. I want to apply this to combinations for large n, which are impractical to compute.
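No simple closed form is known for a general partial sum, but for large n the sum can still be computed exactly without evaluating huge factorials, by building each binomial coefficient from the previous one via C(n, k+1) = C(n, k)·(n−k)/(k+1). A sketch:

```python
import math

def partial_binomial_sum(n, m):
    """Sum of C(n, k) for k = 0..m, built incrementally term by term."""
    term, total = 1, 1                      # C(n, 0) = 1
    for k in range(m):
        term = term * (n - k) // (k + 1)    # C(n, k+1) from C(n, k); division is exact
        total += term
    return total

assert partial_binomial_sum(10, 10) == 2 ** 10   # full sum recovers 2^n
assert partial_binomial_sum(10, 4) == sum(math.comb(10, k) for k in range(5))
print(partial_binomial_sum(1000, 4))             # feasible even for large n
```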
Stephen Tashi
I don't know any nice formula for $\sum_{k=0}^m \binom{n}{k}$ Your question made me curious and I searched the web. It apparently doesn't know a nice formula either. Perhaps if you give an example of the kind of computation you are trying to do, someone will see a way to compute the result - at least compute it on a computer. | 2020-10-24 04:07:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7689890265464783, "perplexity": 183.80550757484778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107881640.29/warc/CC-MAIN-20201024022853-20201024052853-00454.warc.gz"} |
https://www.scienceforums.net/topic/65384-what-makes-an-electron-orbit/?tab=comments | # What makes an electron orbit?
## Recommended Posts
I have been looking at the structure of the atom lately and wondered what makes the electron orbit? You would think that if a proton is positively charged and an electron is negatively charged, the two would eventually stick together. I realize that in theory the orbiting electron, like a planet, never lets this happen. But what makes an electron orbit in the first place, and when an electron goes from one atom to another, how does it automatically orbit in a way that it does not collide with the proton or neutron? I am not interested in theories (there are way too many of them flying around), but in proof + experiments on what is going on.
Thanks.
##### Share on other sites
This is one of the major problems with the Bohr model of the atom and was part of the argument that lead to the adoption of the quantum mechanical model of the atom.
In the modern model, the electron is not considered to be a little ball orbiting the middle, instead it is a field that takes up the entire region and the shape, size and density of the field is determined by how much energy and angular momentum (and some other things) the field has.
##### Share on other sites
As you mentioned, an atom has a central heavy part, or nucleus, around which electrons orbit, like the planets around the sun.
That's the Bohr model of the atom, which is obsolete. Current thinking has the electron existing as an indeterminate cloud, its 'position' given by probability and the electron's energy.
##### Share on other sites
It doesn't. Orbits were part of the failed Bohr model, but the correct concepts in it included quantization of angular momentum and energy. An electron in the ground state cannot have any less energy — there is no lower state available.
In the QM model the electrons spend some amount of their time in the nucleus. This is experimentally consistent with e.g. the difference in hyperfine splitting between S states and P states, which have differing amounts of overlap with the nucleus. Also with the occurrences of electron capture decays.
##### Share on other sites
If you are "not interested in theories" then you are not really interested in any answer, because any "proof" or "experiment" needs to be interpreted within the framework of some theory. Without theories we cannot even define what an "electron" is, what an "orbit" is, what "an electron goes from one atom to another" means, what a "collision" is...
The concepts of the electron in Maxwell-Lorentz theory, Bohr theory, Lewis theory, quantum mechanics, Dirac theory, and quantum electrodynamics are different. For instance, an electron in Dirac theory has a property named spin, while an electron in Maxwell-Lorentz theory does not. An electron in Bohr theory belongs to a given atom, while an electron in Lewis theory can belong to two atoms at once, etcetera.
Edited by juanrga
##### Share on other sites
but the mods here is oppressive so...
I would tell the topic starter to go to... url deleted
Where.. i discuss why... why... why.. electrons orbit.... why atoms form at all.
-Mosheh Thezion
##### Share on other sites
!
Moderator Note
Yes, we have this thing about the rules, which you agreed to follow when you joined. One of them is about not hijacking threads by advertising pet theories.
##### Share on other sites
you... basically are saying... I CANNOT DISCUSS THE TOPIC.. OR SHARE A THOUGHT... UNLESS I DO IT ON ONE THREAD.
I am doing that.... i did that.. I DID NOT HIJACK ANYTHING.
IN FACT... I WENT OUT OF MY WAY... TO AVOID HIJACKING IT.
I PROVIDED THE LINK.. so any interest person could go there if they wanted to...
this kind of fanatical behavior will drive people away from your site.
gesh.. this is disgusting.
-Mosheh Thezion
##### Share on other sites
!
Moderator Note
Yes, precisely. If you follow the rules, you discuss speculations in one thread, and do not pollute other discussions with it.
Further, you do not drag discussions off-topic by discussing moderator warning. Do not respond to this.
##### Share on other sites
An electron doesn't tend to "orbit" around a nucleus; it tends to "vibrate" around a nucleus, much like a wave. There's also something you have to understand about correlation, which is that in quantum mechanics, things can happen just because they logically should happen.
For instance, if I say an electron travels at 2 miles per second, it will take 1 second for it to travel two miles, but if I say "when the electron's energy equals 3, its distance from the nucleus will equal 20nm", in the second example there is no "time" that it takes in order for 3 to = 20nm to be a true statement; the electron's probability is just "equal" to that distance, which makes it an instantaneous process, and this is the difference between correlation and causation and also why quantum mechanics itself doesn't violate relativity.
So if we use this to look at why an electron doesn't fall or vibrate into the nucleus, it's because at the electron's lowest possible energy, its probability isn't equal to anything in the nucleus. An electron's probability of being in the nucleus = 0, and thus the electron doesn't ever fall into it.
Edited by questionposter
##### Share on other sites
Electrons can be in the nucleus, they just don't stay there. The amount of overlap of the electron wave function with the nucleus dictates the amount of hyperfine splitting and the probability of electron capture reactions.
##### Share on other sites
So it can go in the nucleus, just not for a long enough time to do anything? But isn't there a nodal surface in the nucleus?
##### Share on other sites
Some orbitals have a wave function that goes to zero for r=0, but not all; e.g., hydrogen's 1s orbital varies as $e^{-r/a}$. Even for those that do go to zero, you have to recognize that the nucleus has a spatial extent, so the wave function going to 0 at r=0 is not the same as there being no probability of being in the nucleus.
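To put a rough number on that overlap: for the hydrogen 1s state, integrating |ψ|² with ψ ∝ e^{-r/a} gives P(r<R) = 1 − e^{-2x}(1 + 2x + 2x²) with x = R/a, which is approximately (4/3)x³ for small x. Taking R ~ 1 fm for the proton and the Bohr radius a ≈ 5.29×10⁴ fm gives a tiny but nonzero probability. A back-of-the-envelope sketch, using the point-nucleus 1s wave function only and ignoring finite-nucleus corrections:

```python
import math

def p_inside(x):
    """P(r < R) for a hydrogen 1s electron, with x = R / a (a = Bohr radius).
    Algebraically equal to 1 - exp(-2x)*(1 + 2x + 2x^2), but written with
    expm1 to avoid catastrophic cancellation at tiny x."""
    return -math.expm1(-2 * x) - 2 * x * (1 + x) * math.exp(-2 * x)

a_bohr_fm = 5.29e4            # Bohr radius, in femtometers
x = 1.0 / a_bohr_fm           # proton radius taken as ~1 fm (rough)

p = p_inside(x)
print(f"P(electron inside the nucleus) ~ {p:.1e}")   # on the order of 1e-14

# Sanity checks: small-x limit (4/3) x^3, and P -> 1 as R -> infinity
assert abs(p / ((4.0 / 3.0) * x ** 3) - 1) < 1e-3
assert abs(p_inside(50.0) - 1.0) < 1e-12
```

Tiny, but not zero, which is consistent with the hyperfine-splitting and electron-capture evidence mentioned earlier in the thread.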
##### Share on other sites
The 1s orbital, like other hydrogen-like orbitals, assumes a point-like nucleus and does not give a correct description for small r. When those orbitals are corrected by accounting for the finite size of the nucleus, they no longer go to zero at r=0.
Edited by juanrga
##### Share on other sites
So I suppose it's often the distance from the center of the nucleus, and not from its surface.
I guess a better answer then isn't purely just that an electron's probability never reaches 0, but that the oscillation patterns of the electron and the proton are different enough that even when an electron is in the nucleus their physical probabilities don't overlap.
Edited by questionposter
##### Share on other sites
I wouldn't say that, but then I don't know what a physical probability is, nor does "oscillation pattern" have any meaning to me in this context.
##### Share on other sites
When I say oscillation patterns, I more or less mean the evolution of their wave functions over time, or how I can actually model the physical coordinates an electron is likely to occupy over time; with all the quantization you get the specific orbitals that they have, and that's why nodal surfaces are generated.
Physical probability is any real probability greater than 0. It seems the only way it makes sense that electrons and protons don't combine, if in fact they do come into contact with each other, is that they don't actually come into contact because their oscillations just never line up the right way. It might have to do with some extra-dimensional manifold physics and time symmetry, because in mere 4-dimensional space an electron should combine with a proton if it ever came into contact with it; after all, in 4-dimensional space, if I just run 2 waves into each other in a 3-dimensional tank of water, they are going to hit each other.
Unless maybe there is some kind of weird ionization energy except with combining? But I don't see why you'd need that.
Edited by questionposter
##### Share on other sites
Why does the wave function for an atom have to evolve over time?
Electrons and protons don't often combine because it requires a weak interaction, and that has a very short range and small cross-section.
##### Share on other sites
What do you mean it requires a "weak interaction"? What are particle colliders doing that a normal atom doesn't? If anything, colliders add more energy, unless you mean the "weak force", in which case, what does the weak force have to do with particles combining? Wouldn't the strong force have more to do with that?
It would make sense if there was some kind of ionization energy for electron-proton, but so far I haven't found anything like that, and I also don't know specifically why that stops the particles from combining. Perhaps the quarks are held so tightly together that it takes massive amounts of kinetic energy to break them and then for an electron to combine with them?
What about in the sun? Do electrons fuse into protons in the sun?
I was thinking that because electrons are waves, they physically exist, and according to the math that describes them, they have physical oscillation.
Edited by questionposter
##### Share on other sites
To describe an atom you can use stationary state solutions to the Schroedinger equation. You can neglect the time dependence of the wavefunction if you aren't interested in any dynamic stuff.
The hydrogen solution has r, theta, and phi but no t.
##### Share on other sites
So I guess that only leaves this weird ionization energy thing, where some specific energy in some specific way is required to make the interaction happen, but why does that actually need to happen then?
##### Share on other sites
Why does ionization need to happen? I don't think I understand the question.
> Why does ionization need to happen? I don't think I understand the question.
It seems there needs to be some kind of minimum-energy action to trigger the combining of a proton and electron, because it doesn't appear to happen on its own, but I don't know why it doesn't happen on its own if electrons and protons do overlap. Perhaps they just don't overlap "enough": technically their existence does extend infinitely, but that doesn't mean everything is entangled when you're not looking at it. Maybe a particle collider provides the minimum energy necessary to force an electron into a proton closely enough for them to combine, but what's the actual minimum distance for that?
Edited by questionposter
Oh. Usually "ionization energy" refers to the energy it takes to ionize a given electron from an atom. So I thought you were talking about something entirely different.
I know nothing about electron capture.
> Oh. Usually "ionization energy" refers to the energy it takes to ionize a given electron from an atom. So I thought you were talking about something entirely different.
> I know nothing about electron capture.
I know ionization energy isn't the right term, but like I said I don't know what the right term is, so I don't know what else to call it really.
This topic is now closed to further replies.
https://renku.readthedocs.io/en/latest/user/interactive_basics.html

# Interactive Environment Basics
## What is an Interactive Environment?
Interactive environments on RenkuLab are web-based user interfaces (like JupyterLab and RStudio) that you can launch to develop and run your code and data workflows. They’re commonly used for exploratory analysis because you can try out short blocks of code before combining everything into a (reproducible) workflow.
You can run JupyterLab or RStudio within a project independently from RenkuLab, but RenkuLab offers the following advantages:
• Environments hosted in the cloud with a configurable amount of resources (memory, CPU, and sometimes GPU).
• Environments are defined using Docker, so they can be shared and reproducibly re-created.
• Auto-saving of work back to RenkuLab, so you can recover when your environment is shut down (this happens automatically after 24 hours of inactivity).
• A git client pre-configured with your credentials to easily push your changes back to the server.
• The functionality provided by the renku-python command-line interface (CLI) is automatically available.
## What's in my Interactive Environment?
• Your project, which is cloned into the environment on startup.
• Your data files stored in git LFS (fetched on startup if the option Automatically fetch LFS data is selected).
• All the software required to launch the environment and common tools for working with code (git, git LFS, vim, etc.).
• Any dependencies you specified via conda (environment.yml), via language-specific dependency-management facilities (requirements.txt, install.R, etc.), or installed in the Dockerfile. An exception to this is if the project sets a specific image.
• The renku command-line interface (renku-python).
• The amount of CPUs, memory, and (possibly) GPUs that you configured before launch.
For adding or changing software installed into your project's interactive environment, check out Customizing interactive environments.
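As an example, dependencies declared via conda live in a file like the following. The package list here is purely illustrative, not required by Renku:

```yaml
# environment.yml — conda dependencies installed into the session image
channels:
  - conda-forge
dependencies:
  - python=3.9
  - pandas
  - matplotlib
```

Packages added here become available the next time the project's Docker image is rebuilt.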
## Which Interactive Environment will launch?
The template you choose when you create a project on RenkuLab (or locally call renku init on your project) determines the kind of interactive environment that is available to launch. Once it is initialized, your project can easily be modified, for example to install additional libraries into the environment - see Customizing interactive environments. We provide templates for basic Python, R, and Julia projects. If you wish to use custom templates for your projects, you can build your own! Please refer to the templating documentation.
## Starting a new Interactive Environment
When starting a new interactive environment, you will be asked to configure it. The default configuration should work well for most situations. If, however, you encounter problems with an environment (for example, a crash), you might want to increase its processing power or memory. Here's a rundown of the configuration options.
| Option | Description |
| --- | --- |
| Branch | Default is `master`. You can switch if you are working on another branch. |
| Commit | Default is the latest, but you can launch the environment from an earlier commit. This is especially useful if your latest commit's build failed (see below) or you have unsaved work that was automatically recovered. |
| Default Image | Information about the Docker image used by the interactive environment. When it fails, you can try to rebuild it, or you can check the GitLab job logs. An image can also be pinned so that new commits will not require a new image each time. |
| Default environment | Default is `/lab`, which loads the JupyterLab interface. If you are working with R, you may want to use `/rstudio` for RStudio. Mind that the corresponding packages need to be installed in the image; if you're using a Python template, the `/rstudio` endpoint will not work. |
| Number of CPUs | The number of CPUs available (the quota). Resources are shared, so please select the lowest amount that will work for your use case. Usually the default value works well. |
| Amount of Memory | The amount of RAM available. Resources are shared, so please select the lowest amount that will work for your use case. Usually the default value works well. |
| Number of GPUs | The number of GPUs available. If you can't select any number, no GPUs are available in the RenkuLab deployment you are using. If you request any, you might need to wait for GPUs to free up before you can launch an environment. |
| Automatically fetch LFS data | Default is off. If turned on, all the LFS data will be fetched automatically. This is convenient, but it may considerably slow down the start time if the project contains a lot of data. Refer to Data in Renku for further information. |
## What if the Docker image is not available?
Interactive environments are backed by Docker images. When launching a new interactive environment, a container is created from the image that matches the selected branch and commit.
A GitLab CI/CD pipeline automatically builds a new image using the project's Dockerfile when any of the following happens:
• Creating a project.
• Forking a project (in which case the new build happens for the fork).
• Pushing changes to the project.
The pipeline is defined in the project's .gitlab-ci.yml file. If the project references a specific image to use for all environments, the UI will not check for image availability: such an image is usually provided by the project's maintainer and doesn't change with every new commit.
It may take a long time to build an image for various reasons, but if you’ve just created the project on RenkuLab from one of the templates, it generally takes less than a minute or two.
### The Docker image is still building
If the Docker image has a “still building” message, you can either wait patiently, or watch it build by clicking the associated link to see the streaming log messages on GitLab. This can be useful if you’ve made changes to the Dockerfile or added lines to requirements.txt, environment.yml, or install.R, where something might have gone wrong.
### The Docker image build failed
If this happens, it’s best to click the link to view the logs on GitLab so you can see what happened. Here are some common reasons for build failure:
#### Software installation failure
Problem: You added a new software library to requirements.txt, environment.yml, or install.R, but something was wrong with the installation (e.g. a typo in the name, or extra dependencies required by the library but unavailable in the image).
How to fix this: You can use the GitLab editor or clone your project locally to fix the installation, possibly by adding the extra dependencies it asks for into the Dockerfile (the commented out section in the file explains how to do this). As an alternative, you can start an interactive environment from an earlier commit.
How to avoid this: First try installing into your running interactive environment, e.g. by running pip install -r requirements.txt in the terminal on JupyterLab. You might not have needed to install extra dependencies when installing on your local machine, but the operating system (OS) defined in the Dockerfile has minimal dependencies to keep it lightweight.
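For instance, if a library needs an OS-level package, it can be added in the Dockerfile along these lines. This is a sketch: the exact package name and the user variable (`NB_USER` below) depend on your template's base image:

```dockerfile
# Switch to root to install system packages, then drop privileges again
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends libxml2-dev && \
    rm -rf /var/lib/apt/lists/*
USER ${NB_USER}
```

Keeping the `apt-get` lines in a single `RUN` instruction and cleaning the package lists keeps the image small.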
#### The build timed out
By default, image builds are configured to time out after an hour. If your build takes longer than that, you might want to check out the section on Customizing interactive environments before increasing the timeout.
#### Your project could not be cloned
If you accidentally added 100s of MBs or GBs of data to your repo and didn’t specify that it should be stored in git LFS, it might take too long to clone. In this case, read the docs on how to rewrite history and move these files into git LFS.
Another potential cause is if the project has submodules that are private.
### The Docker image is not available
RenkuLab uses its internal instance of GitLab to build and store an image in the registry each time you create a project, push changes, or use the RenkuLab UI to fork a project. Thus, if you manage to get into a state that skips any of these steps, the image might be unavailable. It's a workaround, but the easiest way to get out of this state is to manually trigger a build by adding a new trivial commit through the GitLab instance, like editing the README.md file.
http://bootmath.com/higher-order-derivatives-in-manifolds.html

# Higher-order derivatives in manifolds
If $E, F$ are real finite dimensional vector spaces and $\mu\colon E \to F$, we can speak of a (total) derivative of $\mu$ in Fréchet sense: $D\mu$, if it exists, is the unique mapping from $E$ to $L(E; F)$, the vector space of linear $E\to F$ mappings, such that for all $x, x_0\in E$ we have
$$\mu(x)=\mu(x_0)+D\mu(x_0)(x-x_0)+o(\lvert x-x_0\rvert).$$
Now since $L(E;F)$ is a vector space itself, the construction can be iterated yielding higher-order derivatives $D^2\mu=D(D\mu), D^3\mu=D(D^2\mu)\ldots$
The concept of first derivative extends to maps $\mu\colon M \to N$ with $M, N$ smooth manifolds, in which case $D\mu\colon TM \to TN$ is defined by
$$D\mu(X_p)(f)=X_p(f \circ \mu), \qquad \forall p \in M,\ \forall X_p \in T_pM,\ \forall f \in C^\infty(N).$$
Question. What about second derivatives? How to generalize the above construction from vector spaces to smooth manifolds?
The obvious way, that of taking $D^2\mu=D(D\mu)$, seems a bit awkward because it involves the complicated tangent-bundle-of-tangent-bundle $T(TM)$. Also, if $\mu=f \colon M \to \mathbb{R}$, I would expect the definition to boil down to
$$D^2f(X_p, Y_p)=(X_pY_p)(f), \qquad \forall p \in M, \forall X_p, Y_p \in T_pM.$$
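A caveat worth noting here: the expression $(X_pY_p)(f)$ by itself is well-defined only at critical points of $f$ (where $df_p = 0$), since otherwise it depends on how $Y_p$ is extended to a vector field near $p$. One standard fix is to choose an affine connection $\nabla$ on $M$ and define the covariant Hessian

$$\nabla^2 f(X, Y) = X(Yf) - (\nabla_X Y)f,$$

which is tensorial in both arguments and symmetric whenever $\nabla$ is torsion-free. At a critical point the connection term vanishes and this reduces to $(X_pY_p)(f)$, independently of the connection chosen.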
I would welcome some reference material on this question and its applications.
Thank you.
https://mathleaks.com/study/analyzing_one-variable_relationships_in_context/grade-2-7/solution
# Analyzing One-Variable Relationships in Context
Equations can be used to represent real-world relationships. When the quantity a variable represents is known, solving the equation makes it possible to determine unknown information. To create an equation, use the relationship between given quantities.
Method
## Problem-Solving using Modeling
Equations that represent real relationships are called mathematical models. What follows is one method of using mathematical models to solve problems.
Suppose a taxi ride from the airport to downtown costs \$46.37. Suppose also that it costs \$4.85 to ride in the taxi, plus an additional \$1.73 per mile traveled. Calculate the distance of the ride using the following method.
### 1
Make sense of given information
First, it can be helpful to highlight the information given about the situation.
• The total cost for the taxi ride is \$46.37.
• The cost per mile traveled is \$1.73.
• There is a starting fee of \$4.85.
### 2
Define variable
A variable can be used to represent the unknown quantity in the situation.
Here, the unknown quantity is the length of the ride. Thus, the variable $m$ will be used to represent the number of miles traveled.
### 3
Relate quantities
Next, it is necessary to understand how the different quantities in the problem relate.
The total cost includes the starting fee and the cost of the miles traveled. Additionally, the cost of the miles traveled can be found by multiplying the cost per mile by the distance traveled. As a verbal equation, this relationship can be expressed as follows. $$\text{total cost} = \text{starting fee} + \text{cost per mile} \cdot \text{distance}$$
### 4
Create equation
Creating the equation involves translating the relationship from Step 3 into symbols. To do this, replace each quantity with the corresponding value.
For this situation, the following equation can be written. $$\begin{aligned} \text{total cost} &= \text{starting fee} + \text{cost per mile} \cdot \text{distance}\\ 46.37 &= 4.85 + 1.73\cdot m \end{aligned}$$
### 5
Solve equation
Solve the created equation to determine the unknown quantity.
$46.37=4.85+1.73m$
$46.37-4.85=4.85+1.73m-4.85$
$41.52=1.73m$
$\dfrac{41.52}{1.73}=\dfrac{1.73m}{1.73}$
$24=m$
$m=24$
The equation has the solution $m=24.$ Thus, the distance traveled was $24$ miles.
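The arithmetic in the steps above can be double-checked with a few lines of Python. This is just a sanity check, not part of the original method:

```python
# Solve 46.37 = 4.85 + 1.73 * m using the same inverse operations as above.
total_cost = 46.37
starting_fee = 4.85
cost_per_mile = 1.73

miles = (total_cost - starting_fee) / cost_per_mile
print(round(miles, 2))  # → 24.0
```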
Exercise
Given a triangle with side lengths $5,$ $(x+3),$ and $(3x-1)$ feet and a perimeter of $23$ feet, what is the measure of the longest side of the triangle?
Solution
To begin, let's make sense of the given information. The perimeter of the triangle is $23$ feet, and the side lengths of the triangle are $5, \quad (x+3), \quad \text{and} \quad (3x-1).$ The perimeter of a polygon is the sum of all its side lengths. Therefore, we can equate the sum of the given lengths with $23$ feet. This gives the following equation. $5 + (x + 3) + (3x - 1) = 23$ Solving this equation gives us the value of $x,$ which will help us find the longest side. We'll start by combining like terms.
$5 + x + 3 + 3x - 1 = 23$
$x + 3x + 5 + 3 - 1 = 23$
$4x + 7 = 23$
From here, inverse operations can be used to isolate $x.$
$4x + 7 = 23$
$4x + 7 - 7 = 23 - 7$
$4x = 16$
$\dfrac{4x}{4} = \dfrac{16}{4}$
$x = 4$
Thus, $x=4$ feet. By substituting $x$ for $4$ in the expressions for the unknown side lengths we can find their measures.
$$\begin{aligned} x+3 &= 4+3 = 7 \\ 3x-1 &= 3 \cdot 4 - 1 = 11 \end{aligned}$$
The side lengths of the triangle are $5,$ $7,$ and $11$ feet.
Therefore, the longest side in the triangle is $11$ feet long.
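The same kind of quick check works for the triangle. Again, this is only a verification sketch of the arithmetic above:

```python
# Perimeter equation: 5 + (x + 3) + (3x - 1) = 23, which simplifies to 4x + 7 = 23.
x = (23 - 7) / 4
sides = [5, x + 3, 3 * x - 1]

print(x)           # → 4.0
print(max(sides))  # → 11.0
```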
https://tex.stackexchange.com/questions/184173/backward-compatibility-of-beamer-problem-keyval-error | # Backward compatibility of Beamer problem - Keyval error
I'm stuck compiling a "not so old" beamer document I made last year, and it seems there is a big backward-compatibility problem between my new version of beamer (3.33) and the one I used before (3.24). Here is a minimal example that compiles with the older version but not the newer:
\documentclass{beamer}
\begin{document}
\begin{frame}
\tableofcontents[currentsection,othersections,hideothersubsections,hidesubsections]
\end{frame}
\begin{frame}[margin=0pt]
\end{frame}
\end{document}
Here is the relevant output from each compiler.
The older:
This is pdfTeX, Version 3.1415926-2.5-1.40.14 (TeX Live 2013/Debian)
LaTeX2e <2011/06/27>
Babel <3.9f> and hyphenation patterns for 6 languages loaded.
Document Class: beamer 2012/10/15 development version 3.24 A class for typesetting presentations (rcs-revision 24853e6b98cf)
The newer:
This is pdfTeX, Version 3.1415926-2.5-1.40.14 (TeX Live 2013/Debian)
[...]
LaTeX2e <2011/06/27> Babel <3.9k> and hyphenation patterns for 6 languages loaded.
[...]
Document Class: beamer 2013/12/02 3.33 A class for typesetting presentations (rcs-revision 332bfd3ce558)
[...]
! Package keyval Error: othersections undefined.
[...]
! Package keyval Error: margin undefined.
There are other keys that trigger exactly the same kind of error (bg, fg). Is this a known problem? I couldn't find a relevant answer on the web, and would welcome any solution other than keeping my old computer for compiling or rewriting my presentations. Maybe a package to load, or something like that? Or a magic incantation?
• Out of curiosity: What is the effect of this othersections option? – user36296 Jun 10 '14 at 14:53
\documentclass[unknownkeysallowed]{beamer}
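With that class option, the example from the question should compile again, since beamer is then told to ignore key-value options it no longer recognizes. Note this is a workaround rather than a restoration of the old behavior: the dropped keys simply have no effect. A sketch of the full document:

```latex
\documentclass[unknownkeysallowed]{beamer}
\begin{document}
\begin{frame}
\tableofcontents[currentsection,othersections,hideothersubsections,hidesubsections]
\end{frame}
\begin{frame}[margin=0pt]
\end{frame}
\end{document}
```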
https://academic.oup.com/beheco/article-lookup/doi/10.1093/beheco/ark010

## Abstract
Bright body colorations of orb-weaving spiders have been hypothesized to be attractive to insects and thus function to increase foraging success. However, the color signals of these spiders are also considered to be similar to those of the vegetation background, and thus the colorations function to camouflage the spiders. In this study, we evaluated these 2 hypotheses by field experiments and by quantifying the spiders' visibility to insects. We first compared the insect interception rates of orbs constructed by the orchid spider, Leucauge magnifica , with and without the spider. Orbs with spiders intercepted significantly more insects than orbs without. Such a result supported the prey attraction but not the camouflaging hypothesis. We then tested whether bright body colorations were responsible for L. magnifica 's attractiveness to insects by manipulating the spiders' color signals with paint. Alteration of color signals significantly reduced L. magnifica 's insect interception and consumption rates, indicating that these spiders' bright body parts were attractive to insects. Congruent with the finding of field manipulations were the color contrasts of various body parts of these spiders. When viewed against the vegetation background, the green body parts were lower, but the bright parts were significantly higher than the discrimination threshold. Results of this study thus provide direct evidence that bright body colorations of orb weavers function as visual lures to attract insects.
Brightly colored animals have fascinated many researchers and have been the subject of numerous studies. The studies of animals' bright coloration can be broadly categorized as intraspecific and interspecific. Studies of animal coloration in the context of intraspecific interactions have mostly focused on behavioral or morphological traits relevant to sexual selection, such as species identification ( Rutowski 1988 ), mate preference ( Petrie and Halliday 1994 ; Andersson and Amundsen 1997 ; Johnsen et al. 1998 ; Grether 2000 ; Rodd et al. 2002 ), and mate quality assessment ( McGraw and Hill 2000 ; Doucet and Montgomerie 2003 ; MacDougall and Montgomerie 2003 ). Most studies in the context of interspecific interactions have focused on antipredation adaptations such as aposematism, crypsis, or mimicry ( Stuart-Fox et al. 2003 ; Ruxton et al. 2004 ). To date, there have been few direct empirical tests of the role bright body colorations play in the context of foraging ( Craig and Ebert 1994 ; Hauber 2002 ; Tso et al. 2002 , 2004 ). In this study, we assessed how bright body coloration is involved in the prey capture of spiders, the most abundant invertebrate predators in the terrestrial ecosystem ( Wise 1993 ; Nyffeler 2000 ).
Various diurnal orb-weaving spiders exhibit brightly colored markings on their body surface, and the roles of these colorations are still under debate. Many spiders hunt nocturnally, and their colorations are usually dark, gray or brown, to reduce the spiders' visibility during daytime ( Oxford and Gillespie 1998 ). However, some orb-weaving spiders of the families Araneidae and Tetragnathidae forage actively during the day, and many of them exhibit conspicuous color patterns ( Yaginuma 1986 ). One group of researchers regarded the bright color patterns of these diurnal orb-weaving spiders as a function to increase foraging success by providing attractive visual signals to prey. For example, the brightly colored dorsum of Argiope argentata of Panama was demonstrated to be more attractive to insects than the spiders' brown ventrum ( Craig and Ebert 1994 ). The spiny spiders, Gasteracantha fornicata , of Australia also exhibit bright coloration on their dorsum. Covering this coloration with paint significantly reduced the spiders' foraging success ( Hauber 2002 ). The brightly colored giant wood spider, Nephila pilipes , of Asia caught significantly more insects than its melanic conspecifics ( Tso et al. 2002 ). Tso et al. (2004) examined how these 2 morphs of N. pilipes were seen by hymenopteran insects by calculating the color contrasts of various body parts against the vegetation background. They found the bright color bands of N. pilipes to be highly visible to hymenopteran insects, and they regarded this to be the reason for the attractiveness of the typical morph.
The camouflaging hypothesis, on the other hand, regards the bright coloration of orb-weaving spiders as functioning to conceal the spiders against the vegetation background. This hypothesis proposes that because the reflectance spectra of the spiders' body surface are similar to those of the background vegetation, the spiders are not easily perceived by insects. The vegetation background in which these spiders build their webs is usually a complex mosaic consisting of green vegetation, fallen leaves, and bark exhibiting complex UV signals ( Blackledge 1998 ; Zschokke 2002 ). Because the bright body colorations of spiders also reflect UV, spiders may blend well with the vegetation background and thus are difficult to detect by their prey or predators. Although the functions of the crab spider body coloration had been demonstrated to be either attracting prey ( Heiling et al. 2003 , 2005 ) or concealing the spiders ( Chittka 2001 ; Théry and Casas 2002 ), to our knowledge, there is no empirical study to simultaneously test these 2 alternative explanations. Evidence from several studies has shown that altering the color signals of orb-weaving spiders reduced their insect-catching rate ( Craig and Ebert 1994 ; Hauber 2002 ), and therefore, this seems to provide direct support for the prey attraction hypothesis. However, such results could also be interpreted as being congruent with the camouflaging hypothesis because the alteration of body coloration in the treatment might have destroyed the camouflaging pattern, thus rendering the spider more visible against the background and therefore lowering the insect-catching rate. Therefore, to test these 2 alternative hypotheses, it is not sufficient to merely compare the insect interception rates between the bright orb-weaving spiders and their color-manipulated conspecifics. Rather, a comparison in insect interceptions between orbs with or without spiders is needed. 
If the bright coloration of spiders serves as camouflaging device, then orbs with or without spiders will have similar insect interception rates. On the other hand, if the body coloration serves as an attractant, then orbs with spiders will intercept more insects than those without spiders.
In this study, we evaluated the prey attraction and camouflaging functions of bright body coloration of the orchid spider, Leucauge magnifica , by conducting field experiments and by quantifying their visibility to insects. Firstly, we manipulated the presence of spiders on webs to see whether such treatment would affect the insect interception rates. Secondly, we manipulated the color signals of orchid spiders to see whether their coloration is responsible for their attractiveness. Finally, we quantified how orchid spiders were seen by insects. The color contrasts of various body parts of orchid spiders against vegetation backgrounds were calculated by the color hexagon model of Chittka (1992) to assess whether these brightly colored spiders were visible to their prey.
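The color hexagon of Chittka (1992) maps the relative excitations of a hymenopteran's three photoreceptor types (UV, blue, green) onto a plane, and color contrast is the Euclidean distance between two stimuli in that plane. The following is our own minimal illustration of that calculation, not the authors' code, and the excitation values in the example are invented:

```python
import math

def hexagon_point(e_uv, e_blue, e_green):
    """Chittka's (1992) colour-hexagon coordinates from the relative
    excitations (in [0, 1]) of the UV, blue, and green photoreceptors."""
    x = math.sin(math.radians(60)) * (e_green - e_uv)
    y = e_blue - 0.5 * (e_uv + e_green)
    return x, y

def colour_contrast(stimulus, background):
    """Distance in hexagon units between a stimulus and its background."""
    x1, y1 = hexagon_point(*stimulus)
    x2, y2 = hexagon_point(*background)
    return math.hypot(x1 - x2, y1 - y2)

# Illustrative values only: a 'bright' body part versus green foliage.
print(colour_contrast((0.6, 0.5, 0.9), (0.1, 0.2, 0.7)))
```

A body part whose contrast against the background falls below the bee's discrimination threshold would be effectively invisible; one well above it would be conspicuous, as reported for the spiders' bright parts.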
## METHODS
### The study site and the spider
Field manipulative studies were conducted in the summers of 2004 and 2005 at Lien-Hwa-Chih Research Center operated by the Taiwan Forestry Research Institute in Yu-Chi, Nantou County, Taiwan. The study site consisted of a mixture of primary broadleaf forests and Taiwanese fir plantations. A stable population of orchid spiders, L. magnifica (Araneae: Tetragnathidae), was found in the neighborhood of the research center. Orchid spiders construct horizontal webs on herbaceous plants along the margin of trails in the study site throughout the year. The prosoma and legs of orchid spiders are green, but their opithosoma are brightly colored. The dorsum is silver with thin longitudinal black stripes ( Figure 1A ). On the ventrum are 2 distinct yellow stripes embedded in a dark green area ( Figure 1B ). In this study, only female orchid spiders were used because their body coloration is brighter and they forage much more actively compared with males (I-M Tso, personal observations).
Figure 1
Dorsal (A) and ventral (B) views of the female orchid spider, Leucauge magnifica , showing various brightly colored body parts. The scale bars are 5 mm. (A) 1, green legs; 2, green prosoma; 3, silver dorsum; 4, black longitudinal stripes. (B) 1, green coax; 2, black sterna; 3, yellow stripes; 4, dark green ventrum.
### Testing the effect of spiders on prey interception
In this part of the study, we evaluated whether the presence of an orchid spider would affect the prey interception rate of the web. Each day before the experiment, we randomly assigned spiders to 2 groups, experimental and control. In the experimental group, the spiders were carefully removed from their webs; in the control group, the spiders were left on the webs. Spider body length, hub diameter, orb radius in 4 cardinal directions, and number of radii were measured to the nearest millimeter with a digital caliper. The catching area of the orb was estimated with the formula in Herberstein and Tso (2000). Prey interception rates (number of insects hitting the web per hour) were measured with video cameras. Ten video cameras were set up in the study site, 5 in each group. We placed the cameras 2 m away and recorded at an angle of 45° to the left or right side of the webs (depending on the nearby microhabitat). Recordings were made daily from 06:00 AM to 02:00 PM between 1 and 6 April 2005. Prey interception was estimated by averaging the number of prey intercepted by webs during 8 h of monitoring. We defined an interception event as prey bumping into the web and being entangled for at least 5 s. Prey that passed through webs without touching the silk were not included in the analyses. The insect interception data fitted a Poisson distribution well (Pearson χ² test, P = 0.4196) (Steel et al. 1997). Therefore, we used Poisson regression to examine the relationship between prey interception rate, orb area, and the presence/absence of spiders. In this analysis, the probabilities of events (such as insect interceptions) under various conditions (such as different treatments or orbs of different areas) were compared. An iteratively reweighted least squares method was used to obtain the maximum likelihood estimate of the ratio between the probabilities of different events. A χ² test was then used to evaluate whether this ratio (the difference between the probabilities of events) reached statistical significance (Steel et al. 1997). The Poisson model is
$\log \mu = \log N(X_i) + X_i \beta,$
where μ is the expected value, X represents the explanatory variables (spider presence/absence or orb area), β is the vector of regression coefficients, and N(X) is the total number of individuals. Web area was treated as a categorical variable because of the small sample size. We ranked web areas into the following 3 categories: <100, 100–200, and 200–300 cm².
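Because of the log link, a fitted coefficient β converts to a multiplicative effect on the expected event rate: the ratio between the rates of two conditions is e^β. A minimal sketch of this back-transformation, using the spider-removal coefficient reported in Table 1 purely as a worked example:

```python
import math

def rate_ratio(beta):
    # Under a log link, a Poisson-regression coefficient beta corresponds to
    # a multiplicative effect on the expected event rate:
    # rate_treatment / rate_control = e^beta.
    return math.exp(beta)

# Spider-removal coefficient from Table 1: beta = -0.9002, so webs without
# spiders are estimated to intercept ~0.41x as many insects as webs with spiders.
print(round(rate_ratio(-0.9002), 2))  # -> 0.41
```

A reference category (here, the control group) has β fixed at 0, so its rate ratio is e⁰ = 1 by construction.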
### Testing the effect of spider body coloration on prey interception and consumption
In this part of the study, we evaluated whether altering the color signals of orchid spiders would affect their prey interception and consumption rates. Each day before the experiment, female spiders were assigned to 4 groups. In the first group, the dorsal silver bands of the spider were covered with a green paint of known reflectance (Figure 4F). In the second group, the green paint was applied to the ventral yellow stripes. In the third group, it was applied to both the dorsal and ventral sides of the spiders. In the fourth (control) group, the green paint was applied to the green parts of the abdomen (the areas between the silver dorsum and the yellow stripes). Spider body length, hub diameter, orb radius in 4 cardinal directions, and number of radii were measured to the nearest millimeter. The numbers of insects intercepted by the orbs and consumed by the spiders were also measured with video cameras. Twelve video cameras were used in the experiment, 3 in each group. Recordings were made daily from 06:00 AM to 02:00 PM for a total of 19 recording days in August and September 2004. Rates of prey interception and consumption were estimated by averaging the number of prey intercepted by webs, or consumed by spiders, during 8 h of monitoring. Because the insect interception data fitted a Poisson distribution well (Pearson χ² test, P = 0.7138) (Steel et al. 1997), we again used Poisson regression to examine the relationship between prey interception rate, orb area, and the body color treatments. In this analysis, web areas were ranked into the following 4 categories: 200–300, 300–400, 400–500, and 500–600 cm².
### Calculation of color contrasts
Color contrast is the contrast produced by the spectral difference between 2 areas; it can only be detected by a visual system with at least 2 photoreceptor types. Calculating color contrast requires the illuminance spectrum (the spectrum of the light source), the reflectance spectrum of the object, and the spectral sensitivities of all photoreceptor types in the visual system. Multiplying the illuminance spectrum by the reflectance spectrum of the object gives the color signal of that area. The spectral sensitivity of each photoreceptor type was integrated with the color signal to obtain the relative absorption of each photoreceptor type. The excitation of each photoreceptor was multiplied by a sensitivity factor and transformed to a theoretical voltage excitation, E, to account for the nonlinearity of the photoreceptor response to light. With the color hexagon model of Chittka (1992), the locus of each color signal in the hexagon can be plotted, and the distance between 2 loci gives the chromatic contrast.
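The geometry of the hexagon model can be sketched as follows. This is a minimal illustration of the approach, not the exact computation used in the study: the receptor inputs P are assumed to be quantum catches already normalized to the adapting background, and the numerical catches for the "silver" example are hypothetical.

```python
import math

def excitation(p):
    # Nonlinear photoreceptor response: E = P / (P + 1), so a receptor
    # adapted to the background (P = 1) sits at half-maximal excitation.
    return p / (p + 1.0)

def hexagon_locus(p_uv, p_blue, p_green):
    # Color-hexagon coordinates (Chittka 1992) from the excitations of a
    # UV/blue/green trichromat such as the honeybee.
    e_uv, e_b, e_g = excitation(p_uv), excitation(p_blue), excitation(p_green)
    x = math.sin(math.radians(60.0)) * (e_g - e_uv)
    y = e_b - 0.5 * (e_uv + e_g)
    return (x, y)

def color_contrast(locus_a, locus_b):
    # Chromatic contrast = Euclidean distance between two hexagon loci.
    return math.dist(locus_a, locus_b)

# The adapting background (P = 1 in every receptor) maps to the center (0, 0);
# an object's contrast against the background is its distance from the center.
background = hexagon_locus(1.0, 1.0, 1.0)
silver = hexagon_locus(2.5, 2.0, 1.2)  # hypothetical quantum catches
print(color_contrast(silver, background) > 0.05)  # -> True (above detection threshold)
```

Contrasts below the hymenopteran discrimination threshold of 0.05 (Théry and Casas 2002) are treated as indistinguishable from the background.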
Seven mature female orchid spiders were collected from the study site, and the reflectance spectra of various parts of their bodies were measured with a spectrometer (S2000, Ocean Optics, Inc., Dunedin, Florida) in the laboratory. For each measurement, the illumination leg of the reflection probe (with 6 illumination fibers) was attached to a light source (450-W xenon arc lamp) and the read leg (with one read fiber) to the spectrometer. The tip of the probe was placed vertically 5 mm above the sample. We measured the legs, carapace, green bands on the side and ventrum of the abdomen, the dorsal silver bands, and the green paint used in the field manipulative study. Four reflectance measurements were made on each body part of each L. magnifica, and the means were used in the subsequent calculations of color contrasts. Reflectance spectra of herbaceous vegetation collected from the study sites were obtained in the same way. We chose 6 plant species commonly seen at the study sites to characterize the color signals of the vegetation background. From each plant species, reflectance spectra were measured from 6 leaves. Data from the 6 species were averaged and used in the calculations of the color contrasts of the spiders' body coloration.
Color signals were generated by multiplying the surface reflectance function by the illumination function of the habitat (Wandell 1995). The surface reflectance function is the fraction of light reflected by the surfaces of the spiders or plants. The daylight illumination function of the forest understory was obtained from Tso et al. (2004). We used the spectral sensitivity functions of the honeybee to determine the photoreceptor excitation for each measured spectrum. Honeybees have UV, blue, and green receptors, and such trichromatic color vision is found in almost all major insect taxa (see review by Briscoe and Chittka 2001); color contrasts estimated from the honeybee visual system should therefore be quite representative. Leucauge magnifica builds horizontal webs in the forest understory, so the background is ground vegetation when the spider is viewed from above or from the side, and canopy when it is viewed from below. Therefore, the color contrasts of most body parts of the orchid spiders were calculated against a vegetation background. However, because the 2 yellow stripes are embedded in a patch of dark green abdomen (Figure 1B), the color contrasts of these stripes, and of the paint applied to them, were calculated against the dark green ventrum. The calculations of color contrasts against the various backgrounds followed Chittka (1992, 1996, 2001). One-tailed t-tests were used to compare the color contrast values with the discrimination threshold of 0.05 estimated for hymenopteran insects (Théry and Casas 2002). Previous studies showed that hymenopterans adopt achromatic vision, using the green receptor signal alone, when searching for an object far ahead, and adopt chromatic vision, using green, blue, and UV receptor signals, when approaching the object (Giurfa et al. 1997; Spaethe et al. 2001; Heiling et al. 2003). In this study, the color contrasts were calculated under both conditions to examine how prey see the orchid spiders against the vegetation background under the 2 chromatic systems.
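The integration step described above (illuminant × object reflectance × receptor sensitivity, normalized to the vegetation background) can be sketched as a discrete sum over sampled wavelengths. The spectra below are hypothetical stand-ins for illustration, not the measured ones:

```python
def quantum_catch(sensitivity, reflectance, illuminant, background):
    """Background-normalized receptor signal P for one photoreceptor type.

    Each argument is a list of values sampled at the same wavelengths.
    The raw signal integrates illuminant * object reflectance * receptor
    sensitivity; von Kries-style adaptation scales it so that the
    vegetation background yields P = 1.
    """
    raw = sum(s * r * i for s, r, i in zip(sensitivity, reflectance, illuminant))
    norm = sum(s * b * i for s, b, i in zip(sensitivity, background, illuminant))
    return raw / norm

# Hypothetical 5-band spectra (e.g. samples at 350, 400, 450, 500, 550 nm)
green_sens = [0.05, 0.1, 0.3, 0.8, 1.0]   # green-receptor sensitivity
daylight   = [0.6, 0.8, 1.0, 1.0, 0.9]    # understory daylight illuminant
leaf       = [0.05, 0.05, 0.08, 0.15, 0.25]  # background vegetation
silver     = [0.5, 0.55, 0.6, 0.6, 0.65]  # broadband "silver" reflector

# The background tested against itself is, by construction, P = 1
print(quantum_catch(green_sens, leaf, daylight, leaf))  # -> 1.0
# A broadband reflector yields a much larger catch than the leaf background
print(quantum_catch(green_sens, silver, daylight, leaf) > 1.0)  # -> True
```

The resulting P values for the UV, blue, and green receptors are what feed into the hexagon-locus calculation of Chittka (1992).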
## RESULTS
### Testing the effect of spiders on prey interception
In this part of the study, data were included in the analysis only when spiders stayed on their orbs for more than 5 h during video monitoring. Valid insect interception data were obtained from 288 h of video recording: 176 h from the control group (n = 22 spiders) and 112 h from the experimental group (n = 14 spiders). When orb area was taken into account, the insect interception rates of webs in the control group were significantly higher than those of the experimental group (Table 1). Webs with spiders intercepted almost twice as many insects per hour as webs without spiders (Figure 2).
Figure 2
Mean (±standard error) prey interception rates (number of insects per hour) of Leucauge magnifica in the experimental (spider removed) and control (spider remained) groups estimated from video recording.
Table 1

Results of Poisson regression comparing prey interception rates of orchid spiders estimated by video recordings between the experimental (spiders removed) and control (spiders remained) groups^a,b

| Poisson regression | Parameter | Estimate of β | SE | χ² | P |
|---|---|---|---|---|---|
| Intercept | | −1.2548 | 0.1097 | 9.38 | 0.0022 |
| Experimental | Without spider | −0.9002 | 0.3147 | 8.18 | 0.0042 |
| Control | With spider | 0 | — | — | — |
| Web area | 200–300 | 0.8346 | 0.5172 | 2.61 | 0.1065 |
| Web area | 100–200 | 0.9916 | 0.437 | 5.15 | 0.0233 |
| Web area | 0–100 | 0 | 0 | 0 | — |

SE, standard error.

a: The β of the control group and the orb area 0–100 size category were arbitrarily designated as 0 to facilitate comparison of probabilities of different events.

b: The ratio between the probabilities of 2 events is e^β.
### Testing the effect of spider body coloration on prey interception and consumption
In this part of the study, data were included in the analysis only when spiders stayed on their orbs for more than 5 h during video monitoring. Valid data were available from a total of 448 h of video recording: 128 h from the control (n = 16 spiders), 112 h from the dorsum-painted (n = 14 spiders), 112 h from the ventrum-painted (n = 14 spiders), and 96 h from the both-sides-painted group (n = 12 spiders). Insect interception and consumption rates in the dorsum-painted and ventrum-painted groups were lower than those of the control group (Figure 3), but the differences did not reach statistical significance (Tables 2 and 3). When orb area was taken into account, the insect interception and consumption rates of spiders painted on both the dorsal and ventral sides were significantly lower than those of the control group (Tables 2 and 3). Spiders in the control group intercepted and consumed 3 times as many insects per hour of monitoring as spiders whose dorsal and ventral color signals were both altered by paint (Figure 3).
Figure 3
Mean (±standard error) prey interception (number of insects per hour) and consumption (number of insects consumed per hour) rates of Leucauge magnifica in the control (green part painted) and experimental (dorsum or ventrum or both sides painted) groups estimated from video recording.
Table 2

Results of Poisson regression comparing rates of prey interception of orchid spiders estimated by video recordings between the experimental (bright bands on dorsum and/or ventrum painted) and control (green body parts on both sides of abdomen painted) groups^a,b

| Poisson regression | Parameter | Estimate of β | SE | χ² | P |
|---|---|---|---|---|---|
| Intercept | | 0.1995 | 0.2617 | 0.58 | 0.446 |
| Experimental | Both sides painted | −0.7817 | 0.3921 | 3.97 | 0.0462 |
| Experimental | Ventrum painted | −0.1976 | 0.3058 | 0.42 | 0.5182 |
| Experimental | Dorsum painted | −0.2529 | 0.265 | 0.91 | 0.3398 |
| Control | Green part painted | 0 | — | — | — |
| Web area | 500–600 | −0.9942 | 0.3341 | 8.35 | 0.0039 |
| Web area | 400–500 | −1.0869 | 0.3471 | 9.8 | 0.0017 |
| Web area | 300–400 | −1.4969 | 0.3977 | 14.16 | 0.0002 |
| Web area | 200–300 | −0.8198 | 0.3283 | 6.24 | 0.0125 |
| Web area | 100–200 | 0 | 0 | 0 | — |

SE, standard error.

a: The β of the control group and the orb area 100–200 size category were arbitrarily designated as 0 to facilitate comparison of probabilities of different events.

b: The ratio between the probabilities of 2 events is e^β.
Table 3

Results of Poisson regression comparing rates of prey consumption of orchid spiders estimated by video recordings between the experimental (bright bands on dorsum and/or ventrum painted) and control (green body parts on both sides of abdomen painted) groups^a,b

| Poisson regression | Parameter | Estimate of β | SE | χ² | P |
|---|---|---|---|---|---|
| Intercept | | −0.0372 | 0.2983 | 0.02 | 0.9008 |
| Treatment | Both sides painted | −1.0221 | 0.4330 | 5.57 | 0.0183 |
| Treatment | Ventrum painted | −0.2780 | 0.3235 | 0.74 | 0.3902 |
| Treatment | Dorsum painted | −0.4735 | 0.2920 | 0.74 | 0.3902 |
| Treatment | Control | 0 | — | — | — |
| Web area | 500–600 | −0.9246 | 0.4218 | 5.29 | 0.0214 |
| Web area | 400–500 | −0.9354 | 0.3906 | 5.73 | 0.0166 |
| Web area | 300–400 | −1.2387 | 0.4350 | 8.11 | 0.0044 |
| Web area | 200–300 | −0.4746 | 0.3598 | 1.74 | 0.1873 |
| Web area | 100–200 | 0 | 0 | 0 | — |

SE, standard error.

a: The β of the control group and the orb area 100–200 size category were arbitrarily designated as 0 to facilitate comparison of probabilities of different events.

b: The ratio between the probabilities of 2 events is e^β.
### Calculation of color contrasts
Mean reflectance spectra of various body parts of the orchid spider and of leaves of various plants at the study site were used in the calculations of color contrasts. The green body parts of orchid spiders, such as the legs, carapace, and ventrum, had very similar chromatic properties: all exhibited low reflectance across the wavelengths measured (Figure 4C,D), a pattern very similar to that of the vegetation background (Figure 4B). In contrast, the dorsal silver bands of orchid spiders reflected a considerable amount of light across all wavelengths measured (Figure 4E). The green paint used had a high reflectance at wavelengths between 400 and 550 nm (Figure 4F). Under achromatic vision, color contrasts of the various body parts of orchid spiders viewed against the vegetation background were significantly higher than the discrimination threshold (Table 4). Under chromatic vision, however, color contrasts of the various green body parts against the vegetation background were low (Figure 5) and not significantly greater than the discrimination threshold (Table 4). This result indicates that hymenopteran prey could not distinguish the color signals of the green body parts of orchid spiders from the background vegetation at short distance. Under chromatic vision, color contrasts of the dorsal silver bands against the vegetation background were high (Figure 5) and significantly higher than the discrimination threshold (Table 4). The ventral yellow stripes viewed against the dark green ventrum also exhibited a very high color contrast (Table 4 and Figure 5). The color contrast of the green paint was also significantly higher than the threshold, whether it was seen against the vegetation background or the dark green ventrum (Table 4 and Figure 5).
Figure 4
Mean reflectance spectra of various body parts of the orchid spider Leucauge magnifica . (A) The forest understory daylight illuminating spectrum, (B) vegetation background, (C) carapace and leg, (D) green stripes on abdomen, (E) silver band on the dorsum, and (F) the green paint used in the experimental group.
Figure 5
Mean (±standard error) color contrasts of various body parts of the orchid spider, Leucauge magnifica , against the different vegetation backgrounds and the spiders' green ventrum seen by honeybees under chromatic and achromatic vision. Dashed line represents the threshold for color contrast discrimination calculated for Hymenoptera.
Table 4

Results of one-tailed t-tests comparing the color contrasts of various body parts of the orchid spider, Leucauge magnifica, against the vegetation background and against the dark green ventrum of the spider, as seen by honeybees under chromatic and achromatic vision, with the discrimination threshold of 0.05

| Vision | | Leg | Carapace | Dark green ventrum | Silvery dorsum | Ventrum stripes | Paint dorsum | Paint ventrum |
|---|---|---|---|---|---|---|---|---|
| Chromatic | t₆ | 0.893 | 0.620 | 0.707 | 2.792 | 0.497 | 3.608 | 2.704 |
| | P | 0.203 | 0.279 | 0.253 | 0.016 | 0.318 | 0.006 | 0.018 |
| Achromatic | t₆ | 16.618 | 11.721 | 16.585 | 21.276 | 0.052 | 5.435 | 3.758 |
| | P | <0.001 | <0.001 | <0.001 | <0.001 | 0.48 | <0.001 | 0.005 |
## DISCUSSION
Results of this study showed that the colorful spider itself can serve as a visual lure for its prey. Orbs with orchid spiders intercepted almost twice as many insects as orbs without them. This result is not congruent with the camouflaging hypothesis, which predicts similar prey interception rates for orbs with and without spiders. Results of this and previous studies thus demonstrate that orb-weaving spiders do not passively wait for accidentally trapped prey but use various means to lure it. Orb weavers such as the spiny spider (Hauber 2002), giant wood spider (Tso et al. 2002, 2004), and garden spider (Craig and Ebert 1994), and hunters such as crab spiders (Heiling et al. 2003, 2005), use their bright body coloration to lure prey. Various species of the genera Argiope, Cyclosa, and Octonoba incorporate silk structures called decorations into their webs to serve as visual lures (Herberstein et al. 2000). Bolas spiders (Haynes et al. 2002) use chemicals mimicking the sex pheromones of their moth prey as attractants, whereas Nephila spiders deposit half-digested prey on their webs to attract insects (Bjorkman-Chiswell et al. 2004). Therefore, the traditional view of orb-weaving spiders as aerial filter feeders that passively sieve prey from the air flowing through their orbs should be reconsidered.
Results of this study also demonstrate that the attractiveness of orchid spiders to their prey is achieved by their bright body coloration. When either the dorsal silver bands or the ventral yellow stripes were painted over, insect interception and consumption rates were reduced, but not significantly. However, when the color signals of both the dorsum and ventrum were altered, interception and consumption rates were reduced further, and the difference was statistically significant. These results indicate that both the dorsal silver bands and the ventral yellow stripes are attractive to insects. When the color signal on one side of the abdomen was altered, the signal on the other side still functioned, so attractiveness was somewhat lowered but not significantly. When all the color signals were altered, however, the attractiveness of the spiders was reduced dramatically. It is unlikely that the odor of the paint was responsible for the observed result, because the control group also received green paint, applied on the green parts of the abdomen. Paint was present on the bodies of spiders in all treatment groups, so the observed variation in prey capture among groups should be unrelated to the odor of the paint.
The attractiveness of the orchid spider's body coloration seems to be achieved by the properties of the color signal rather than by the mere visibility of the spider. Early in this study, when choosing paint with which to alter the spiders' color signal, we purposely selected a paint whose reflectance spectrum differed from that of the spiders. The color contrasts of the green paint viewed against either the vegetation background or the spiders' dark green ventrum were significantly higher than the discrimination threshold, indicating that the paint could be readily seen by insects. Despite this high visibility, painted spiders still intercepted and consumed far fewer insects than the control group. These results indicate that the reflectance properties of orb-weaving spiders' body coloration are critical to insect interception: the properties of the color signal appear to have been fine-tuned by selection for maximal attractiveness to prey. Once these properties were altered, the changed coloration, though still quite visible, was no longer attractive to insects. It is currently unclear why these color signals attract insects. The color signals of orb-weaving spiders may be similar to those of flowers and new leaves (Prokopy and Owens 1983), so that the spiders are perceived by their prey as some form of resource. Field studies are needed to determine what resources these colorations mimic and whether these orb-weaving spiders are exploiting the visual systems of their prey.
Insects see by detecting the contrasts between objects and their surroundings, and all color receptor types and signals are involved (Chittka and Menzel 1992; Vorobyev and Brandt 1997; Briscoe and Chittka 2001). We suggest that all receptor signals should be considered when exploring the visual interactions between predators and prey. Numerous studies have manipulated the UV component of such systems and found that, in some cases, the attractiveness of spider body coloration or silk decorations was affected (Craig and Bernard 1990; Tso 1996; Watanabe 1999; Li et al. 2004). These results can be interpreted as the manipulation altering the insects' perception, so that they were no longer attracted by the modified color signal. In this study, however, we did not alter the UV signal of the spider but used a paint with strong reflectance in the yellow–green spectrum. This treatment was equally effective in reducing the attractiveness of the orchid spiders' body coloration. The result indicates that altering the color signal, whether in the UV, blue, or green spectrum, changes the relative excitations of the receptors; the recipient organism consequently perceives a different signal and alters its behavioral response.
Various body parts of orchid spiders differ considerably in brightness and color contrast, a pattern commonly seen in numerous genera of orb-weaving spiders such as Nephila, Argiope, and the spiny spiders (Yaginuma 1986). We suggest that the co-occurrence of low- and high-color-contrast body parts in these spiders may be an adaptive morphological trait. Because the bright coloration of orb-weaving spiders is attractive to insects, a body entirely covered by high-contrast coloration would make the spider's contour all too obvious, and prey would quickly learn to associate that shape with danger. The presence of low-contrast coloration, however, changes the apparent shape of the spider: breaks in the contour created by low-contrast body parts, combined with the resource-mimicking signals of the high-contrast parts, make it difficult for insects to associate these spiders with predation risk. Another advantage of such contour-breaking coloration might be reduced predation risk for the spiders themselves. Most predators of these orb weavers, such as birds and parasitoid wasps (Coville 1987; Blackledge and Pickett 2000; Blackledge and Wenzel 2001; Craig et al. 2001), are visually orientated, and a spider covered by large areas of high-contrast coloration is easily detected. Therefore, the combination of low-contrast coloration that breaks the body contour and high-contrast coloration that attracts prey seems to be a product of the counteracting selection pressures involved in spider–insect visual interactions.
We wish to thank T. Y. Cho, Y. S. Hong, J. Hou, L. F. Chen, and J. Rykken for their assistance in the field and laboratory. Special thanks to Dr J. L. Huang, director of the Lien-Hua-Chih Research Center, for logistic support. This work was supported by grants from the National Science Council, Taiwan, ROC (NSC-93-2311-B-029-001, NSC-94-2311-B-029-004) to I.-M.T.
## References
1997. Ultraviolet colour vision and ornamentation in bluethroats. Proc R Soc Lond B Biol Sci 264:1587–91.

Bjorkman-Chiswell B, Kulinski MM, Muscat RL, Nguyen KA, Norton BA, Symonds MRE, Westhorpe GE, Elgar MA. 2004. Web-building spiders attract prey by storing decaying matter. Naturwissenschaften 91:245–8.

Blackledge TA. 1998. Signal conflict in spider webs driven by predators and prey. Proc R Soc Lond B Biol Sci 265:1991–6.

Blackledge TA, Pickett KM. 2000. Predatory interactions between mud-dauber wasps (Hymenoptera, Sphecidae) and Argiope (Araneae, Araneidae) in captivity. J Arachnol 28:211–6.

Blackledge TA, Wenzel JW. 2001. Silk mediated defense by an orb web spider against predatory mud-dauber wasps. Behaviour 138:155–71.

Briscoe AD, Chittka L. 2001. The evolution of colour vision in insects. Annu Rev Entomol 46:471–510.

Chittka L. 1992. The colour hexagon: a chromaticity diagram based on photoreceptor excitation as a generalized representation of colour opponency. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 170:533–43.

Chittka L. 1996. Optimal sets of colour receptors and opponent processes for coding of natural objects in insect vision. J Theor Biol 181:179–96.

Chittka L. 2001. Camouflage of predator crab spiders on flowers and the colour perception of bees (Aranida: Thomisidae/Hymenoptera: Apidae). Entomol Gen 25:181–7.

Chittka L, Menzel R. 1992. The evolutionary adaptation of flower colours and the insect pollinators' colour vision. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 171:171–81.

Coville RE. 1987. Spider-hunting sphecid wasps. In: Nentwig W, editor. Ecophysiology of spiders. Berlin, Germany: Springer-Verlag. p 309–27.

Craig CL, Bernard GD. 1990. Insect attraction and ultraviolet-reflecting spider webs and web decorations. Ecology 71:616–20.

Craig CL, Ebert K. 1994. Colour and pattern in predator–prey interactions: the bright body colours and patterns of a tropical orb-spinning spider attract flower-seeking prey. Funct Ecol 8:616–20.

Craig CL, Wolf SG, Davis JLD, Hauber ME, Maas JL. 2001. Signal polymorphism in the web-decorating spider Argiope argentata is correlated with reduced survivorship and the presence of stingless bees, its primary prey. Evolution 55:986–93.

Doucet SM, Montgomerie R. 2003. Multiple sexual ornaments in satin bowerbirds: ultraviolet plumages and bowers signal different aspects of male quality. Behav Ecol Sociobiol 14:503–9.

Giurfa M, Vorobyev M, Brandt R, Posner B, Menzel R. 1997. Discrimination of coloured stimuli by honeybees: alternative use of achromatic and chromatic signals. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 180:235–43.

Grether GF. 2000. Carotenoid limitation and mate preference evolution: a test of the indicator hypothesis in guppies (Poecilia reticulata). Evolution 54:1712–14.

Hauber ME. 2002. Conspicuous coloration attracts prey to a stationary predator. Ecol Entomol 27:686–91.

Haynes KF, Gemeno C, Yeargan KV, Millar JG, Johnson KM. 2002. Aggressive chemical mimicry of moth pheromones by a bolas spider: how does this specialist predator attack more than one species of prey? Chemoecology 12:99–105.

Heiling AM, Chittka L, Chen K, Herberstein ME. 2005. Coloration in crab spiders: substrate choice and prey attraction. J Exp Biol 208:1785–92.

Heiling AM, Herberstein ME, Chittka L. 2003. Crab-spiders manipulate flower signals. Nature 421:334.

Herberstein ME, Craig CL, Coddington JA, Elgar MA. 2000. The functional significance of silk decorations of orb-web spiders: a critical review of the empirical evidence. Biol Rev Camb Philos Soc 75:649–69.

Herberstein ME, Tso IM. 2000. Evaluation of formulae to estimate the capture area and mesh height of orb webs. J Arachnol 28:180–4.

Johnsen A, Andersson S, Ornberg J, Lifjeld JT. 1998. Ultraviolet plumage ornamentation affects social mate choice and sperm competition in bluethroats (Aves: Luscinia s. svecica): a field experiment. Proc R Soc Lond B Biol Sci 265:1313–8.

Li D, Lim MLM, Seah WK, Tay SL. 2004. Prey attraction as a possible function of discoid stabilimenta of juvenile orb-spinning spiders. Anim Behav 68:629–35.

MacDougall AK, Montgomerie R. 2003. Assortative mating by carotenoid-based plumage colour: a quality indicator in American goldfinches, Carduelis tristis. Naturwissenschaften 90:464–7.

McGraw KJ, Hill GE. 2000. Differential effects of endoparasitism on the expression of carotenoid- and melanin-based ornamental coloration. Proc R Soc Lond B Biol Sci 267:1525–31.

Nyffeler M. 2000. Ecological impact of spider predation: a critical assessment of Bristowe's and Turnbull's estimates. Bull Br Arachnol Soc 11:367–73.

Oxford GS, Gillespie RG. 1998. Evolution and ecology of spider coloration. Annu Rev Entomol 43:619–43.

Petrie N, Halliday T. 1994. Experimental and natural changes in the peacock's (Pavo cristatus) train can affect mating success. Behav Ecol Sociobiol 35:213–7.

Prokopy RJ, Owens ED. 1983. Visual detection of plants by herbivorous insects. Annu Rev Entomol 28:337–64.

Rodd FH, Hughes KA, Grether GF, Baril CT. 2002. A possible non-sexual origin of mate preferences: are male guppies mimicking fruit? Proc R Soc Lond B Biol Sci 269:475–81.

Rutowski RL. 1988. Mating strategies in butterflies. Sci Am 279:64–9.

Ruxton GD, Sherratt TN, Speed MP. 2004. Avoiding attack: the evolutionary ecology of crypsis, warning signals and mimicry. Oxford: Oxford University Press.

Spaethe J, Tautz J, Chittka L. 2001. Visual constraints in foraging bumblebees: flower size and colour affect search time and flight behavior. Proc Natl Acad Sci USA 98:3898–903.

Steel RGD, Torrie JH, Dickey DA. 1997. Principles and procedures of statistics: a biometrical approach. New York: McGraw-Hill Press. p 558–61.

Stuart-Fox DM, Moussalli A, Marshall NJ, Owens IPF. 2003. Conspicuous males suffer higher predation risk: visual modelling and experimental evidence from lizards. Anim Behav 66:541–50.

Théry M, Casas J. 2002. Predator and prey views of spider camouflage. Nature 415:133.

Tso IM. 1996. A test of the insect attraction function of silk stabilimenta [PhD dissertation]. Ann Arbor, MI: University of Michigan.

Tso IM, Lin CW, Yang EC. 2004. Colourful orb-weaving spiders and web decorations through a bee's eyes. J Exp Biol 207:
2631
–7.
Tso IM, Tai PL, Ku TH, Kuo CH, Yang EC.
2002
. Colour-associated foraging success and population genetic structure in a sit-and-wait predator Nephila maculata (Araneae: Tragnathidae).
Anim Behav
63
:
175
–82.
Vorobyev M, Brandt R.
1997
. How do insect pollinators discriminate colours?
Isr J Plant Sci
45
:
103
–13.
Wandell BA.
1995
. Foundations of vision. Sunderland, MA: Sinauer Associates, Inc.
Watanabe T.
1999
. Prey attraction as a possible function of the silk decoration of the uloborid spider Octonoba sybotides.
Behav Ecol
5
:
607
–11.
Wise DH.
1993
. Spiders in ecological webs. Cambridge, UK: Cambridge University Press.
Yaginuma T.
1986
. Spiders of Japan in colour. Osaka, Japan: Hoikusha Publishing Company (in Japanese).
Zschokke S.
2002
. Ultraviolet reflectance of spiders and their webs.
J Arachnol
30
:
246
–54.
# Available processors
This section presents a detailed description of all processors that are currently supported by the Auditory front-end framework. Each processor can be controlled by a set of parameters, which will be explained together with their default settings. Finally, a demonstration will be given, showing the functionality of each processor. The corresponding Matlab files are contained in the Auditory front-end folder /test and can be used to reproduce the individual plots. A full list of available processors can be displayed by using the command requestList. An overview of the commands for instantiating processors is given in Computation of an auditory representation.
## Pre-processing (preProc.m)
Prior to computing any of the supported auditory representations, the input signal stored in the data object can be pre-processed with one of the following elements:
1. DC bias removal
2. Pre-emphasis
3. RMS normalisation using an automatic gain control
4. Level scaling to a pre-defined SPL reference
5. Middle ear filtering
The order of processing is fixed. However, individual stages can be activated or deactivated, depending on the requirements of the user. The output is a time domain signal representation that is used as input to the subsequent processors. The adjustable parameters are listed in Table 4.
Table 4 List of parameters related to the auditory representation ’time’.
Parameter Default Description
pp_bRemoveDC false Activate DC removal filter
pp_cutoffHzDC 20 Cut-off frequency in Hz of the high-pass filter
pp_bPreEmphasis false Activate pre-emphasis filter
pp_coefPreEmphasis 0.97 Coefficient of first-order high-pass filter
pp_bNormalizeRMS false Activate RMS normalisation
pp_intTimeSecRMS 2 Time constant in s used for RMS estimation
pp_bBinauralRMS true Link RMS normalisation across both ear signals
pp_bLevelScaling false Apply level scaling to the given reference
pp_refSPLdB 100 Reference dB SPL to correspond to the input RMS
pp_bMiddleEarFiltering false Apply middle ear filtering
pp_middleEarModel 'jepsen' Middle ear filter model
The influence of each individual pre-processing stage except for the level scaling is illustrated in Fig. 7, which can be reproduced by running the script DEMO_PreProcessing.m. Panel 1 shows the left and the right ear signals of two sentences at two different levels. The ear signals are then mixed with a sinusoid at 0.5 Hz to simulate an interfering humming noise. This humming can be effectively removed by the DC removal filter, as shown in panel 3. Panel 4 shows the influence of the pre-emphasis stage. The AGC can be used to equalise the long-term RMS level difference between the two sentences. However, if the level difference between both ear signals should be preserved, it is important to synchronise the AGC across both channels, as illustrated in panels 5 and 6. Panel 7 shows the influence of the level scaling when using a reference value of 100 dB SPL. Panel 8 shows the signals after middle ear filtering, as the stapes motion velocity. Each individual pre-processing stage is described in the following subsections.
### DC removal filter
To remove low-frequency humming, a DC removal filter can be activated by using the flag pp_bRemoveDC = true. The DC removal filter is based on a fourth-order IIR Butterworth filter with a cut-off frequency of 20 Hz, as specified by the parameter pp_cutoffHzDC = 20.
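The effect of DC removal can be sketched with a one-pole DC-blocking filter. This is a deliberately simplified first-order illustration in Python, not the fourth-order IIR Butterworth filter the processor actually uses (the toolbox itself is implemented in Matlab):

```python
import math

def dc_blocker(x, fs, fc=20.0):
    """Simplified first-order DC blocker: y[n] = x[n] - x[n-1] + R*y[n-1].

    Illustrative only; the preProc stage uses a fourth-order IIR
    Butterworth high-pass with the same cut-off frequency fc.
    """
    R = 1.0 - 2.0 * math.pi * fc / fs  # pole radius approximating cut-off fc
    y, prev_x, prev_y = [], 0.0, 0.0
    for xn in x:
        yn = xn - prev_x + R * prev_y
        y.append(yn)
        prev_x, prev_y = xn, yn
    return y
```

A constant (DC) offset decays towards zero at the output, while components well above the cut-off frequency pass almost unchanged.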
### Pre-emphasis
A common pre-processing stage in the context of ASR includes a signal whitening. The goal of this pre-processing stage is to roughly compensate for the decreased energy at higher frequencies (e.g. due to lip radiation). Therefore, a first-order FIR high-pass filter is employed, where the filter coefficient pp_coefPreEmphasis determines the amount of pre-emphasis and is typically selected from the range between 0.9 and 1. Here, we set the coefficient to pp_coefPreEmphasis = 0.97 by default according to [Young2006]. This pre-emphasis filter can be activated by setting the flag pp_bPreEmphasis = true.
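The pre-emphasis difference equation, y[n] = x[n] - coef * x[n-1], can be sketched as follows (illustrative Python; the toolbox itself is implemented in Matlab):

```python
def pre_emphasis(x, coef=0.97):
    """First-order FIR high-pass: y[n] = x[n] - coef * x[n-1].

    Sketch of the pp_coefPreEmphasis stage; the first sample is
    passed through unchanged (implicit zero initial condition).
    """
    return [x[0]] + [x[n] - coef * x[n - 1] for n in range(1, len(x))]
```

Slowly varying (low-frequency) content is strongly attenuated, whereas rapidly alternating (high-frequency) content is boosted, which is the intended whitening effect.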
### RMS normalisation
A signal level normalisation stage is available which can be used to equalise long-term level differences (e.g. when recording two speakers at two different distances). For some applications, such as ASR and speaker identification systems, it can be advantageous to maintain a constant signal power, such that the features extracted by subsequent processors are invariant to the overall signal level. To achieve this, the input signal is normalised by its RMS value, which is estimated by a first-order low-pass filter with a time constant of pp_intTimeSecRMS = 2 s. Such a normalisation stage has also been suggested in the context of AMS feature extraction [Tchorz2003], which is described in Amplitude modulation spectrogram (modulationProc.m). The choice of the time constant is a balance between maintaining the level fluctuations across individual words and allowing the normalisation stage to follow sudden level changes.
The normalisation can be either applied independently for the left and the right ear signal by setting the parameter pp_bBinauralRMS = false, or the processing can be linked across ear signals by setting pp_bBinauralRMS = true. When being used in the binaural mode, the larger RMS value of both ear signals is used for normalisation, which will preserve the binaural cues (e.g. ITD and ILD) that are encoded in the signal. The RMS normalisation can be activated by the parameter pp_bNormalizeRMS = true.
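The linked (binaural) and independent modes can be sketched as follows. This is an illustrative Python sketch of the idea, assuming a simple first-order low-pass power estimate; it is not the toolbox's Matlab implementation:

```python
import math

def rms_normalize(left, right, fs, tau=2.0, binaural=True, eps=1e-10):
    """AGC sketch: divide each sample by a running RMS estimate.

    binaural=True divides both channels by the larger of the two RMS
    estimates, preserving interaural level differences.
    """
    alpha = math.exp(-1.0 / (fs * tau))  # first-order low-pass coefficient
    out_l, out_r = [], []
    p_l = p_r = 0.0
    for xl, xr in zip(left, right):
        p_l = alpha * p_l + (1 - alpha) * xl * xl
        p_r = alpha * p_r + (1 - alpha) * xr * xr
        if binaural:
            rms = math.sqrt(max(p_l, p_r)) + eps  # shared divisor
            out_l.append(xl / rms)
            out_r.append(xr / rms)
        else:
            out_l.append(xl / (math.sqrt(p_l) + eps))
            out_r.append(xr / (math.sqrt(p_r) + eps))
    return out_l, out_r
```

With the shared divisor, a 14 dB level difference between the ear signals stays a 14 dB difference at the output; with independent normalisation it is removed.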
### Level reference and scaling
This stage is designed to implement the effect of calibration, in which the amplitude of the incoming digital signal is matched to sound pressure in the physical domain. This operation is necessary when any of the Auditory front-end models requires the input to be represented in physical units (such as pascals, see the middle ear filtering stage below). Within the current Auditory front-end framework, the DRNL filter bank model requires this signal representation (see Dual-resonance non-linear filter bank (drnlProc.m)). This stage is activated by setting pp_bLevelScaling = true, with a reference value pp_refSPLdB in dB SPL which should correspond to an input RMS of 1. The input signal is then scaled accordingly, if it had been calibrated to a different reference. The default value of pp_refSPLdB is 100, which corresponds to the convention used in the work of [Jepsen2008]. The implementation is adopted from the Auditory Modeling Toolbox [Soendergaard2013].
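The scaling itself reduces to a single gain derived from the difference between the reference the signal was calibrated to and the framework reference. A minimal sketch (the function and argument names here are hypothetical; only pp_refSPLdB = 100 comes from the text above):

```python
def level_scaling(x, signal_ref_db, framework_ref_db=100.0):
    """Rescale a signal calibrated as 'RMS 1 <-> signal_ref_db dB SPL'
    so that RMS 1 corresponds to framework_ref_db dB SPL instead."""
    gain = 10.0 ** ((signal_ref_db - framework_ref_db) / 20.0)
    return [gain * s for s in x]
```

For example, a signal calibrated to a 120 dB SPL reference must be amplified by 20 dB (a factor of 10) to match the 100 dB SPL convention.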
### Middle ear filtering
This stage corresponds to the operation of the middle ear where the vibration from the eardrum is transformed into the stapes motion. The filter model is based on the findings from the measurement of human stapes displacement by [Godde1994]. Its implementation is adopted from the Auditory Modeling Toolbox [Soendergaard2013], which derives the stapes velocity as the output [Lopez-Poveda2001], [Jepsen2008]. The input is assumed to be the eardrum pressure represented in pascals which in turn assumes prior calibration. This input-output representation in physical units is required particularly when the DRNL filter bank model is used for the BM operation, because of its level-dependent nonlinearity, designed based on that representation (see Dual-resonance non-linear filter bank (drnlProc.m)). When including the middle-ear filtering in combination with the linear gammatone filter, only the simple band-pass characteristic of this model is needed without the need for input calibration or consideration of the input/output units. The middle ear filtering can be applied by setting pp_bMiddleEarFiltering = true. The filter data from [Lopez-Poveda2001] or from [Jepsen2008] can be used for the processing, by specifying the model pp_middleEarModel = 'lopezpoveda' or pp_middleEarModel = 'jepsen' respectively.
## Auditory filter bank
One central processing element of the Auditory front-end is the separation of incoming acoustic signals into different spectral bands, as it happens in the human inner ear. In psychoacoustic modelling, two different approaches have been followed over the years. One is the simulation of this stage by a linear filter bank composed of gammatone filters. This linear gammatone filter bank can be considered a standard element for auditory models and has therefore been included in the framework. A computationally more challenging, but at the same time physiologically more plausible simulation of this process can be realised by a nonlinear BM model, and we have implemented the DRNL model, as developed by [Meddis2001]. The filter bank representation is requested by using the name tag 'filterbank'. The filter bank type can be controlled by the parameter fb_type. To select a gammatone filter bank, fb_type should be set to 'gammatone' (which is the default), whereas the DRNL filter bank is used when setting fb_type = 'drnl'. Some of the parameters are common to the two filter banks, while some are specific, in which case their value is disregarded if the other type of filter bank was requested. Table 5 summarises all parameters corresponding to the 'filterbank' request. Parameters specific to a filter bank type are separated by a horizontal line. The two filter bank implementations are described in detail in the following two subsections, along with their corresponding parameters.
Table 5 List of parameters related to the auditory representation 'filterbank'
Parameter Default Description
fb_type 'gammatone' Filter bank type, 'gammatone' or 'drnl'
fb_lowFreqHz 80 Lowest characteristic frequency in Hz
fb_highFreqHz 8000 Highest characteristic frequency in Hz
fb_nERBs 1 Distance between adjacent filters in ERB
fb_nChannels [] Number of frequency channels
fb_cfHz [] Vector of characteristic frequencies in Hz
fb_nGamma 4 Filter order, 'gammatone'-only
fb_bwERBs 1.01859 Filter bandwidth in ERB, 'gammatone'-only
fb_mocIpsi 1 Ipsilateral MOC factor (0 to 1), given as a scalar (applied across all frequency channels) or a vector (individual value per frequency channel), 'drnl'-only
fb_mocContra 1 Contralateral MOC factor (0 to 1), same format as fb_mocIpsi, 'drnl'-only
fb_model 'CASP' DRNL model (reserved for future extension), 'drnl'-only
### Gammatone (gammatoneProc.m)
The time domain signal can be processed by a bank of gammatone filters that simulates the frequency selective properties of the human BM. The corresponding Matlab function is adopted from the Auditory Modeling Toolbox [Soendergaard2013]. The gammatone filters cover a frequency range between fb_lowFreqHz and fb_highFreqHz and are linearly spaced on the ERB scale [Glasberg1990]. In addition, the distance between adjacent filter centre frequencies on the ERB scale can be specified by fb_nERBs, which effectively controls the frequency resolution of the gammatone filter bank. There are three different ways to control the centre frequencies of the individual gammatone filters:
1. Define a vector with centre frequencies, e.g. fb_cfHz = [100 200 500 ...]. In this case, the parameters fb_lowFreqHz, fb_highFreqHz, fb_nERBs and fb_nChannels are ignored.
2. Specify fb_lowFreqHz, fb_highFreqHz and fb_nChannels. The requested number of filters fb_nChannels will be spaced between fb_lowFreqHz and fb_highFreqHz. The centre frequencies of the first and the last filter will match with fb_lowFreqHz and fb_highFreqHz, respectively. To accommodate an arbitrary number of filters, the spacing between adjacent filters fb_nERBs will be automatically adjusted. Note that this changes the overlap between neighbouring filters.
3. It is also possible to specify fb_lowFreqHz, fb_highFreqHz and fb_nERBs. Starting at fb_lowFreqHz, the centre frequencies will be spaced at a distance of fb_nERBs on the ERB scale until the specified frequency range is covered. The centre frequency of the last filter will not necessarily match with fb_highFreqHz.
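Method 3 above can be sketched using the ERB-rate scale of [Glasberg1990]. This is an illustrative Python sketch (the toolbox performs this computation internally in Matlab); the function names are ours:

```python
import math

def freq_to_erb_rate(f_hz):
    """Glasberg & Moore (1990) ERB-rate scale."""
    return 21.4 * math.log10(0.00437 * f_hz + 1.0)

def erb_rate_to_freq(erb):
    """Inverse of the ERB-rate conversion."""
    return (10.0 ** (erb / 21.4) - 1.0) / 0.00437

def erb_spaced_cfs(low_hz, high_hz, n_erbs=1.0):
    """Centre frequencies spaced n_erbs apart on the ERB scale,
    starting at low_hz, until the range up to high_hz is covered."""
    e, e_high = freq_to_erb_rate(low_hz), freq_to_erb_rate(high_hz)
    cfs = []
    while e <= e_high + 1e-9:
        cfs.append(erb_rate_to_freq(e))
        e += n_erbs
    return cfs
```

As noted in the text, the centre frequency of the last filter produced this way does not necessarily coincide with fb_highFreqHz.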
The filter order, which determines the slope of the filter skirts, is set to fb_nGamma = 4 by default. The bandwidths of the gammatone filters depend on the filter order and the centre frequency, and the default scaling factor for a fourth-order filter is approximately fb_bwERBs = 1.01859. When adjusting the parameter fb_bwERBs, it should be noted that the resulting filter shape will deviate from the original gammatone filter as measured by [Glasberg1990]. For instance, increasing fb_bwERBs leads to a broader filter shape. A full list of parameters is shown in Table 5.
The gammatone filter bank is illustrated in Fig. 8, which has been produced by the script DEMO_Gammatone.m. The speech signal shown in the left panel is passed through a bank of 16 gammatone filters spaced between 80 Hz and 8000 Hz. The output of each individual filter is shown in the right panel.
### Dual-resonance non-linear filter bank (drnlProc.m)
The DRNL filter bank models the nonlinear operation of the cochlea, in addition to the frequency selective feature of the BM. The DRNL processor was motivated by attempts to better represent the nonlinear operation of the BM in the modelling, and allows for testing the performance of peripheral models with the BM nonlinearity and MOC feedback in comparison to that with the conventional linear BM model. All the internal representations that depend on the BM output can be extracted using the DRNL processor in the dependency chain in place of the gammatone filter bank. This can reveal the implication of the BM nonlinearity and MOC feedback for activities such as speech perception in noise (see [Brown2010] for example) or source localisation. It is expected that the use of a nonlinear model, together with the adaptation loops (see Adaptation (adaptationProc.m)), will reduce the influence of overall level on the internal representations and extracted features. In this sense, the use of the DRNL model is a physiologically motivated alternative to a linear BM model, where the influence of level is typically removed by the use of a level normalisation stage (see AGC in Pre-processing (preProc.m) for example). The structure of the DRNL filter bank is based on the work of [Meddis2001]. The frequencies corresponding to the places along the BM, over which the responses are to be derived and observed, are specified as a list of characteristic frequencies fb_cfHz. For each characteristic frequency channel, the time domain input signal is passed through linear and nonlinear paths, as seen in Fig. 9. Currently the implementation follows the model defined as CASP by [Jepsen2008], in terms of the detailed structure and operation, which is specified by the default argument 'CASP' for fb_model.
In the CASP model, the linear path consists of a gain stage, two cascaded gammatone filters, and four cascaded low-pass filters; the nonlinear path consists of a gain (attenuation) stage, two cascaded gammatone filters, a ’broken stick’ nonlinearity stage, two more cascaded gammatone filters, and a low-pass filter. The outputs at the two paths are then summed as the BM output motion. These sub-modules and their individual parameters (e.g., gammatone filter centre frequencies) are specific to the model and hidden to the users. Details regarding the original idea behind the parameter derivation can be found in [Lopez-Poveda2001], which the CASP model slightly modified to provide a better fit of the output to physiological findings from human cochlear research works.
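The 'broken stick' nonlinearity in the nonlinear path has the general form y = sign(x) * min(a|x|, b|x|^c). A minimal sketch follows; the parameter values a, b and c used here are illustrative placeholders, since the CASP model uses channel-dependent values that are internal to the processor:

```python
import math

def broken_stick(x, a=1000.0, b=1.0, c=0.25):
    """'Broken stick' nonlinearity sketch: linear gain a for small inputs,
    compressive branch b*|x|**c for large inputs, sign preserved.

    a, b, c are hypothetical illustration values, not CASP parameters.
    """
    return [math.copysign(min(a * abs(s), b * abs(s) ** c), s) for s in x]
```

Small inputs fall on the linear branch (gain a), while large inputs are compressed, so doubling a large input raises the output by only 2^c.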
The MOC feedback is implemented in an open-loop structure within the DRNL filter bank model as the gain factor to be applied to the nonlinear path. This approach is used by [Ferry2007], where the attenuation caused by the MOC feedback at each of the filter bank channels is controlled externally by the user. Two additional input arguments are introduced for this feature: fb_mocIpsi and fb_mocContra. These represent the amount of reflexive feedback through the ipsilateral and contralateral paths, in the form of factors from 0 to 1 by which the nonlinear path input signal is multiplied. Conceptually, fb_mocIpsi = 1 and fb_mocContra = 1 mean that no attenuation is applied to the nonlinear path input, and fb_mocIpsi = 0 and fb_mocContra = 0 mean that the nonlinear path is eliminated entirely. Table 5 summarises the parameters for the DRNL processor that can be controlled by the user. Note that fb_cfHz corresponds to the characteristic frequencies and not the centre frequencies as used in the gammatone filter bank, although they can have the same values for comparison. Otherwise, the characteristic frequencies can be generated in the same way as the centre frequencies for the gammatone filter bank.
Fig. 10 shows the BM stage output at 1 kHz characteristic frequency using the DRNL processor (on the right hand side), compared to that using the gammatone filter bank (left hand side), based on the right ear input signal shown in panel 1 of Fig. 7 (speech excerpt repeated twice with a level difference). The plots can be generated by running the script DEMO_DRNL.m. It should be noted that the CASP model of DRNL filter bank expects the input signal to be transformed to the middle ear stapes velocity before processing. Therefore, for direct comparison of the outputs in this example, the same pre-processing was applied for the gammatone filter bank (stapes velocity was used as the input, through the level scaling and middle ear filtering). It is seen that the level difference between the initial speech component and its repetition is reduced with the nonlinearity incorporated, compared to the gammatone filter bank output, which shows the compressive nature of the nonlinear model responding to input level changes as described earlier.
## Inner hair-cell (ihcProc.m)
The IHC functionality is simulated by extracting the envelope of the output of individual gammatone filters. The corresponding IHC function is adopted from the Auditory Modeling Toolbox [Soendergaard2013]. Typically, the envelope is extracted by combining half-wave rectification and low-pass filtering. The low-pass filter is motivated by the loss of phase-locking in the auditory nerve at higher frequencies [Bernstein1996], [Bernstein1999]. Depending on the cut-off frequency of the IHC models, it is possible to control the amount of fine-structure information that is present in higher frequency channels. The cut-off frequency and the order of the corresponding low-pass filter vary across methods and a complete overview of supported IHC models is given in Table 6. A particular model can be selected by using the parameter ihc_method.
Table 6 List of supported IHC models
ihc_method Description
'hilbert' Hilbert transform
'halfwave' Half-wave rectification
'fullwave' Full-wave rectification
'square' Squared
'dau' Half-wave rectification and low-pass filtering at 1000 Hz [Dau1996]
'joergensen' Hilbert transform and low-pass filtering at 150 Hz [Joergensen2011]
'breebart' Half-wave rectification and low-pass filtering at 770 Hz [Breebart2001]
'bernstein' Half-wave rectification, compression and low-pass filtering at 425 Hz [Bernstein1999]
The effect of the IHC processor is demonstrated in Fig. 11, where the output of the gammatone filter bank is compared with the output of an IHC model by running the script DEMO_IHC.m. Whereas individual peaks are resolved in the lowest channel of the IHC output, only the envelope is retained at higher frequencies.
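Envelope extraction in the style of the 'dau' model can be sketched as half-wave rectification followed by a low-pass filter. A first-order low-pass is used below for brevity; the filter order of the actual model may differ, and the toolbox itself is implemented in Matlab:

```python
import math

def ihc_envelope(x, fs, cutoff_hz=1000.0):
    """IHC envelope sketch: half-wave rectification followed by a
    first-order low-pass at cutoff_hz (simplified 'dau'-style model)."""
    alpha = math.exp(-2.0 * math.pi * cutoff_hz / fs)
    env, state = [], 0.0
    for s in x:
        state = alpha * state + (1.0 - alpha) * max(s, 0.0)
        env.append(state)
    return env
```

For channels well below the cut-off the fine structure survives the low-pass, whereas for high-frequency channels only the (non-negative) envelope remains, matching the behaviour shown in Fig. 11.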
## Adaptation (adaptationProc.m)
This processor corresponds to the adaptive response of the auditory nerve fibers, in which abrupt changes in the input result in emphasised overshoots, followed by a gradual decay to a compressed steady-state level [Smith1977], [Smith1983]. The function is adopted from the Auditory Modeling Toolbox [Soendergaard2013]. The adaptation stage is modelled as a chain of five feedback loops in series. Each of the loops consists of a low-pass filter with its own time constant and a division operator [Pueschel1988], [Dau1996], [Dau1997a]. At each stage, the input is divided by its low-pass filtered version. The time constant affects the charging / releasing state of the filter output at a given moment, and thus the amount of attenuation caused by the division. This implementation realises a process in which input variations that are rapid compared to the time constants are transformed almost linearly, whereas stationary input signals undergo logarithmic compression.
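The loop chain can be sketched as follows. This is an illustrative Python sketch of the divider-loop principle, assuming first-order low-pass filters and omitting the overshoot limiting (adpt_lim) of the full model; the minval floor here stands in for the adpt_mindB threshold:

```python
import math

def adaptation_loops(x, fs, taus=(0.005, 0.050, 0.129, 0.253, 0.500),
                     minval=1e-5):
    """Chain of feedback divider loops: each stage divides its input by a
    low-pass filtered copy of its own output (no overshoot limiting)."""
    alphas = [math.exp(-1.0 / (fs * t)) for t in taus]
    # Start each low-pass state at its steady state for a minval input;
    # the steady-state output of one divider loop is sqrt of its input.
    states, v = [], minval
    for _ in taus:
        v = math.sqrt(v)
        states.append(v)
    out = []
    for s in x:
        v = max(s, minval)
        for i, a in enumerate(alphas):
            v = v / states[i]                       # divide by low-passed output
            states[i] = a * states[i] + (1 - a) * v  # update the low-pass state
        out.append(v)
    return out
```

For a stationary input I the chain settles near I^(1/32) (logarithmic-like compression through five square roots), while an onset produces a pronounced overshoot before the states charge up.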
Table 7 List of parameters related to 'adaptation'.
Parameter Default Description
adpt_lim 10 Overshoot limiting ratio
adpt_mindB 0 Lowest audible threshold of the signal in dB SPL
adpt_tau [0.005 0.050 0.129 0.253 0.500] Time constants of feedback loops
adpt_model '' (empty) Implementation model 'adt_dau', 'adt_puschel', or 'adt_breebaart' can be used instead of the above three parameters (see Table 8)
The adaptation processor uses three parameters to generate the output from the IHC representation: adpt_lim determines the maximum ratio of the onset response amplitude against the steady-state response, which sets a limit to the overshoot caused by the loops. adpt_mindB sets the lowest audible threshold of the input signal. adpt_tau contains the time constants of the loops. Although the default model uses five loops and thus five time constants, a variable number of elements in adpt_tau is supported, which varies the number of loops accordingly. Some specific sets of these parameters, as used in related studies, are also supported via the adpt_model parameter. This can be given instead of the other three parameters and will set them to the values used by the respective researchers. Table 7 lists the parameters and their default values, and Table 8 lists the supported models. The output signal is expressed in MU, which deviates from a perfect logarithmic input-output transform such that an input level increment in the low level range results in a smaller output level increment than the same increment in the higher level range. This corresponds to a smaller just-noticeable level change at high levels than at low levels [Dau1996]. [Jepsen2008], with the use of the DRNL model for the BM stage, introduces an additional squaring expansion between the IHC output and the adaptation stage, which transforms the input coming through the DRNL-IHC processors into an intensity-like representation compatible with the adaptation implementation originally designed for the gammatone filter bank. The adaptation processor recognises whether the DRNL or the gammatone processor is used in the chain and adjusts the input signal accordingly.
Table 8 List of supported models related to 'adaptation'.
adpt_model Description
'adt_dau' Parameters as in the models of [Dau1996], [Dau1997a]: 5 adaptation loops with an overshoot limit of 10 and a minimum level of 0 dB. This is a correction with regard to the model described in [Dau1996], which did not use overshoot limiting. The adaptation loops have exponentially spaced time constants adpt_tau = [0.005 0.050 0.129 0.253 0.500].
'adt_puschel' Parameters as in the original model [Pueschel1988]: 5 adaptation loops without overshoot limiting (adpt_lim = 0). The adaptation loops have linearly spaced time constants adpt_tau = [0.0050 0.1288 0.2525 0.3762 0.5000].
'adt_breebaart' As 'adt_puschel', but with overshoot limiting
The effect of the adaptation processor - the exaggeration of rapid variations - is demonstrated in Fig. 12, where the output of the IHC model from the same input as used in the example of Inner hair-cell (ihcProc.m) (the right panel of Fig. 11) is compared to the adaptation output by running the script DEMO_Adaptation.m.
## Auto-correlation (autocorrelationProc.m)
Auto-correlation is an important computational concept that has been extensively studied in the context of predicting human pitch perception [Licklider1951], [Meddis1991]. To measure the amount of periodicity that is present in individual frequency channels, the ACF is computed in the FFT domain for short time frames based on the IHC representation. The unbiased ACF scaling is used to account for the fact that fewer terms contribute to the ACF at longer time lags. The resulting ACF is normalised by the ACF at lag zero to ensure values between minus one and one. The window size ac_wSizeSec determines how well low-frequency pitch signals can be reliably estimated, and common choices are within the range of 10 ms to 30 ms.
For the purpose of pitch estimation, it has been suggested to modify the signal prior to correlation analysis in order to reduce the influence of the formant structure on the resulting ACF [Rabiner1977]. This pre-processing can be activated by the flag ac_bCenterClip, and the nonlinear operation is selected by the parameter ac_clipMethod: centre clip and compress ('clc'), centre clip ('clp'), or combined centre and peak clip ('sgn'). The percentage of centre clipping is controlled by the parameter ac_clipAlpha, which sets the clipping level to a fixed percentage of the frame-based maximum signal level.
A generalised ACF has been suggested by [Tolonen2000], where the exponent ac_K can be used to control the amount of compression that is applied to the ACF. The conventional ACF is computed using a value of ac_K = 2, whereas the function is compressed when a value smaller than 2 is used. The choice of this parameter is a trade-off between sharpening the peaks in the resulting ACF and amplifying the noise floor. A value of ac_K = 2/3 has been suggested as a good compromise [Tolonen2000]. A list of all ACF-related parameters is given in Table 9. Note that these parameters will influence the pitch processor, which is described in Pitch (pitchProc.m).
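The unbiased, lag-zero-normalised ACF of a single frame can be sketched directly in the time domain. Note this is an illustrative Python sketch of the conventional (ac_K = 2) form; the processor itself evaluates the ACF in the FFT domain:

```python
def unbiased_acf(frame, max_lag):
    """Unbiased autocorrelation of one frame, normalised at lag zero."""
    n = len(frame)
    r0 = sum(s * s for s in frame) / n  # lag-zero value for normalisation
    acf = []
    for lag in range(max_lag + 1):
        r = sum(frame[i] * frame[i + lag] for i in range(n - lag))
        acf.append((r / (n - lag)) / r0)  # unbiased scaling: divide by n - lag
    return acf
```

For a periodic frame, peaks appear at lags equal to integer multiples of the period, which is the property exploited by the pitch processor.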
Table 9 List of parameters related to the auditory representation 'autocorrelation'.
Parameter Default Description
ac_wname 'hann' Window type
ac_wSizeSec 0.02 Window duration in s
ac_hSizeSec 0.01 Window step size in s
ac_bCenterClip false Activate centre clipping
ac_clipMethod 'clp' Centre clipping method 'clc', 'clp', or 'sgn'
ac_clipAlpha 0.6 Centre clipping threshold within [0,1]
ac_K 2 Exponent in ACF
A demonstration of the ACF processor is shown in Fig. 13, which has been produced by the script DEMO_ACF.m. It shows the IHC output in response to a 20 ms speech signal for 16 frequency channels (left panel). The corresponding ACF is presented in the upper right panel, whereas the SACF is shown in the bottom right panel. Prominent peaks in the SACF indicate lag periods which correspond to integer multiples of the fundamental frequency of the analysed speech signal. This relationship is exploited by the pitch processor, which is described in Pitch (pitchProc.m).
## Rate-map (ratemapProc.m)
The rate-map represents a map of auditory nerve firing rates [Brown1994] and is frequently employed as a spectral feature in CASA systems [Wang2006], ASR [Cooke2001] and speaker identification systems [May2012]. The rate-map is computed for individual frequency channels by smoothing the IHC signal representation with a leaky integrator that has a time constant of typically rm_decaySec = 8 ms. Then, the smoothed IHC signal is averaged across all samples within a time frame, and thus the rate-map can be interpreted as an auditory spectrogram. Depending on whether the rate-map scaling rm_scaling has been set to 'magnitude' or 'power', either the magnitude or the squared samples are averaged within each time frame. The temporal resolution can be adjusted by the window size rm_wSizeSec and the step size rm_hSizeSec. Moreover, it is possible to control the shape of the window function rm_wname, which is used to weight the individual samples within a frame prior to averaging. The default rate-map parameters are listed in Table 10.
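For one frequency channel, the two steps (leaky integration, then frame averaging) can be sketched as follows. This is an illustrative Python sketch that uses a rectangular window instead of the configurable rm_wname weighting:

```python
import math

def ratemap_channel(ihc, fs, decay_sec=0.008, win_sec=0.02, hop_sec=0.01,
                    scaling='power'):
    """Rate-map sketch for a single channel: leaky integration of the IHC
    signal, then per-frame averaging ('power' or 'magnitude')."""
    alpha = math.exp(-1.0 / (fs * decay_sec))  # leaky integrator coefficient
    smoothed, state = [], 0.0
    for s in ihc:
        state = alpha * state + (1 - alpha) * s
        smoothed.append(state)
    wsize, hop = int(win_sec * fs), int(hop_sec * fs)
    frames = []
    for start in range(0, len(smoothed) - wsize + 1, hop):
        frame = smoothed[start:start + wsize]
        if scaling == 'power':
            frames.append(sum(v * v for v in frame) / wsize)
        else:  # 'magnitude'
            frames.append(sum(abs(v) for v in frame) / wsize)
    return frames
```

Applying this per channel yields the time-frequency map that can be read as an auditory spectrogram.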
Table 10 List of parameters related to 'ratemap'.
Parameter Default Description
'rm_wname' 'hann' Window type
'rm_wSizeSec' 0.02 Window duration in s
'rm_hSizeSec' 0.01 Window step size in s
'rm_scaling' 'power' Rate-map scaling ('magnitude' or 'power')
'rm_decaySec' 0.008 Leaky integrator time constant in s
The rate-map is demonstrated by the script DEMO_Ratemap and the corresponding plots are presented in Fig. 14. The IHC representation of a speech signal is shown in the left panel, using a bank of 64 gammatone filters spaced between 80 and 8000 Hz. The corresponding rate-map representation scaled in dB is presented in the right panel.
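The two-stage computation described above (leaky integration followed by frame averaging) can be sketched in Python for a single frequency channel. The parameter names mirror Table 10, but the function itself is an illustrative approximation, not the toolbox code:

```python
import math

def ratemap(ihc, fs, decay_sec=0.008, wsize_sec=0.02, hsize_sec=0.01,
            scaling='power'):
    """Illustrative rate-map for one frequency channel: the IHC signal is
    smoothed by a first-order leaky integrator (time constant decay_sec)
    and then averaged within overlapping frames, either as magnitudes or
    as squared samples ('power')."""
    alpha = math.exp(-1.0 / (fs * decay_sec))   # leaky-integrator coefficient
    smoothed, state = [], 0.0
    for x in ihc:
        state = alpha * state + (1.0 - alpha) * x
        smoothed.append(state)
    wsize, hsize = int(wsize_sec * fs), int(hsize_sec * fs)
    frames = []
    for start in range(0, len(smoothed) - wsize + 1, hsize):
        seg = smoothed[start:start + wsize]
        if scaling == 'power':
            frames.append(sum(v * v for v in seg) / wsize)
        else:   # 'magnitude'
            frames.append(sum(abs(v) for v in seg) / wsize)
    return frames
```

Applying the full processor across all gammatone channels yields the auditory spectrogram shown in Fig. 14.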
## Spectral features (spectralFeaturesProc.m)¶
In order to characterise the spectral content of the ear signals, a set of spectral features is available that can serve as a physical correlate to perceptual attributes, such as timbre and coloration [Peeters2011]. All spectral features summarise the spectral content of the rate-map representation across auditory filters and are computed for individual time frames. The following 14 spectral features are available:
1. 'centroid' : The spectral centroid represents the centre of gravity of the rate-map and is one of the most frequently-used timbre parameters [Tzanetakis2002], [Jensen2004], [Peeters2011]. The centroid is normalised by the highest rate-map centre frequency to reduce the influence of the gammatone parameters.
2. 'spread' : The spectral spread describes the average deviation of the rate-map around its centroid, which is commonly associated with the bandwidth of the signal. Noise-like signals usually have a large spectral spread, while individual tonal sounds with isolated peaks will result in a low spectral spread. Similar to the centroid, the spectral spread is normalised by the highest rate-map centre frequency, such that the feature value ranges between zero and one.
3. 'brightness' : The brightness reflects the amount of high frequency information and is measured by relating the energy above a pre-defined cutoff frequency to the total energy. This cutoff frequency is set to sf_br_cf = 1500 Hz by default [Jensen2004], [Peeters2011]. This feature might be used to quantify the sensation of sharpness.
4. 'high-frequency content' : The high-frequency content is another metric that measures the energy associated with high frequencies. It is derived by weighting each channel in the rate-map by its squared centre frequency and integrating this representation across all frequency channels [Jensen2004]. To reduce the sensitivity of this feature to the overall signal level, the high-frequency content feature is normalised by the rate-map integrated across-frequency.
5. 'crest' : The SCM is defined as the ratio between the maximum value and the arithmetic mean and can be used to characterise the peakiness of the rate-map. The feature value is low for signals with a flat spectrum and high for a rate-map with a distinct spectral peak [Peeters2011], [Lerch2012].
6. 'decrease' : The spectral decrease describes the average spectral slope of the rate-map representation, putting a stronger emphasis on the low frequencies [Peeters2011].
7. 'entropy' : The entropy can be used to capture the peakiness of the spectral representation [Misra2004]. The resulting feature is low for a rate-map with many distinct spectral peaks and high for a flat rate-map spectrum.
8. 'flatness' : The SFM is defined as the ratio of the geometric mean to the arithmetic mean and can be used to distinguish between harmonic signals (SFM close to zero) and noisy signals (SFM close to one) [Peeters2011].
9. 'irregularity' : The spectral irregularity quantifies the variations of the logarithmically-scaled rate-map across frequencies [Jensen2004].
10. 'kurtosis' : The excess kurtosis measures whether the spectrum can be characterised by a Gaussian distribution [Lerch2012]. This feature will be zero for a Gaussian distribution.
11. 'skewness' : The spectral skewness measures the symmetry of the spectrum around its arithmetic mean [Lerch2012]. The feature will be zero for silent segments and high for voiced speech where substantial energy is present around the fundamental frequency.
12. 'roll-off' : Determines the frequency in Hz below which a pre-defined percentage sf_ro_perc of the total spectral energy is concentrated. Common values for this threshold are between sf_ro_perc = 0.85 [Tzanetakis2002] and sf_ro_perc = 0.95 [Scheirer1997], [Peeters2011]. The roll-off feature is normalised by the highest rate-map centre frequency and ranges between zero and one. This feature can be useful to distinguish voiced from unvoiced signals.
13. 'flux' : The spectral flux evaluates the temporal variation of the logarithmically-scaled rate-map across adjacent frames [Lerch2012]. It has been suggested to be useful for the distinction of music and speech signals, since music has a higher rate of change [Scheirer1997].
14. 'variation' : The spectral variation is defined as one minus the normalised correlation between two adjacent time frames of the rate-map [Peeters2011].
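As an illustration of how some of these features are derived from a single rate-map frame, the following Python sketch computes the normalised centroid, spread and flatness. It assumes strictly positive rate-map values (required for the geometric mean) and is not the toolbox implementation:

```python
import math

def spectral_features(frame, cf_hz):
    """Illustrative per-frame centroid, spread and flatness for one
    rate-map frame; centroid and spread are normalised by the highest
    centre frequency, and flatness is the geometric-to-arithmetic mean
    ratio (SFM). Assumes strictly positive rate-map values."""
    total = sum(frame)
    f_max = cf_hz[-1]
    centroid = sum(f * v for f, v in zip(cf_hz, frame)) / (total * f_max)
    spread = math.sqrt(sum(((f / f_max) - centroid) ** 2 * v
                           for f, v in zip(cf_hz, frame)) / total)
    geo = math.exp(sum(math.log(v) for v in frame) / len(frame))
    flatness = geo / (total / len(frame))
    return centroid, spread, flatness
```

A flat frame yields a flatness near one, while a frame dominated by a single spectral peak yields a flatness near zero, matching the harmonic/noisy distinction described above.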
A list of all parameters is presented in Table 11.
Table 11 List of parameters related to 'spectral_features'.
Parameter Default Description
sf_requests 'all' List of requested spectral features (e.g. 'flux'); type help spectralFeaturesProc in the Matlab command window to display the full list of supported spectral features.
sf_br_cf 1500 Cut-off frequency in Hz for brightness feature
sf_ro_perc 0.85 Threshold (re. 1) for spectral roll-off feature
The extraction of spectral features is demonstrated by the script Demo_SpectralFeatures.m, which produces the plots shown in Fig. 15. The complete set of 14 spectral features is computed for the speech signal shown in the top left panel. Whenever the unit of the spectral feature was given in frequency, the feature is shown in black in combination with the corresponding rate-map representation.
## Onset strength (onsetProc.m)¶
According to [Bregman1990], common onsets and offsets across frequency are important grouping cues that are utilised by the human auditory system to organise and integrate sounds originating from the same source. The onset processor is based on the rate-map representation, and therefore, the choice of the rate-map parameters, as listed in Table 10, will influence the output of the onset processor. The temporal resolution is controlled by the window size rm_wSizeSec and the step size rm_hSizeSec, respectively. The amount of temporal smoothing can be adjusted by the leaky integrator time constant rm_decaySec, which reduces the amount of temporal fluctuations in the rate-map. Onsets are detected by measuring the frame-based increase in energy of the rate-map representation. This detection is performed based on the logarithmically-scaled energy, as suggested by [Klapuri1999]. It is possible to limit the strength of individual onsets to an upper limit, which is by default set to ons_maxOnsetdB = 30. A list of all parameters is presented in Table 12.
Table 12 List of parameters related to 'onset_strength'
Parameter Default Description
ons_maxOnsetdB 30 Upper limit for onset strength in dB
The resulting onset strength expressed in decibels, which is a function of time frame and frequency channel, is shown in Fig. 16. The two figures can be replicated by running the script DEMO_OnsetStrength.m. When considering speech as an input signal, it can be seen that onsets appear simultaneously across a broad frequency range and typically mark the beginning of an auditory event.
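A minimal sketch of the detection rule, assuming the onset strength is simply the positive frame-to-frame increase of the dB-scaled rate-map, clamped at ons_maxOnsetdB (an illustrative simplification of onsetProc.m):

```python
def onset_strength(prev_frame_db, cur_frame_db, max_onset_db=30.0):
    """Illustrative onset strength per frequency channel: the positive
    frame-to-frame increase of the dB-scaled rate-map, limited to
    max_onset_db (cf. ons_maxOnsetdB)."""
    return [min(max(cur - prev, 0.0), max_onset_db)
            for prev, cur in zip(prev_frame_db, cur_frame_db)]
```

Decreases in energy are clipped to zero here; the offset processor described next measures those decreases instead.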
## Offset strength (offsetProc.m)¶
Similarly to onsets, the strength of offsets can be estimated by measuring the frame-based decrease in logarithmically-scaled energy. As discussed in the previous section, the selected rate-map parameters as listed in Table 10 will influence the offset processor. Similar to the onset strength, the offset strength can be constrained to a maximum value of ofs_maxOffsetdB = 30. A list of all parameters is presented in Table 13.
Table 13 List of parameters related to 'offset_strength'.
Parameter Default Description
ofs_maxOffsetdB 30 Upper limit for offset strength in dB
The offset strength is demonstrated by the script DEMO_OffsetStrength.m and the corresponding figures are depicted in Fig. 17. It can be seen that the overall magnitude of the offset strength is lower compared to the onset strength. Moreover, the detected offsets are less synchronised across frequency.
## Binary onset and offset maps (transientMapProc.m)¶
The information about sudden intensity changes, as represented by onsets or offsets, can be combined in order to organise and group the acoustic input according to individual auditory events. The required processing is similar for both onsets and offsets, and is summarised by the term transient detection. To apply this transient detection based on the onset strength or offset strength, the user should use the request name ’onset_map’ or ’offset_map’, respectively. Based on the transient strength which is derived from the corresponding onset strength and offset strength processor (described in Onset strength (onsetProc.m) and Offset strength (offsetProc.m)), a binary decision about transient activity is formed, where only the most salient information is retained. To achieve this, temporal and across-frequency constraints are imposed on the transient information. Motivated by the observation that two sounds are perceived as separate auditory events when the difference in their onset times is in the range of 20 ms – 40 ms [Turgeon2002], transients are fused if they appear within a pre-defined time context. If two transients appear within this time context, only the stronger one will be considered. This time context can be adjusted by trm_fuseWithinSec. Moreover, the minimum across-frequency context can be controlled by the parameter trm_minSpread. To allow for this selection, individual transients which are connected across multiple TF units are extracted using Matlab’s image labelling tool bwlabel. The binary transient map will only retain those transients which consist of at least trm_minSpread connected TF units. The salience of the cue can be specified by the detection threshold trm_minStrengthdB. Whereas this threshold controls the required relative change, a global threshold excludes transient activity if the corresponding rate-map level is below a pre-defined threshold, as determined by trm_minValuedB.
A summary of all parameters is given in Table 14.
Table 14 List of parameters related to 'onset_map' and 'offset_map'.
Parameter Default Description
trm_fuseWithinSec 30E-3 Time constant below which transients are fused
trm_minSpread 5 Minimum number of connected TF units
trm_minStrengthdB 3 Minimum onset strength in dB
trm_minValuedB -80 Minimum rate-map level in dB
To illustrate the benefit of selecting onset and offset information, a rate-map representation is shown in Fig. 18 (left panel), where the corresponding onsets and offsets detected by the transientMapProc, through two individual requests ’onset_map’ and ’offset_map’, and without applying any temporal or across-frequency constraints are overlaid (respectively in black and white). It can be seen that the onset and offset information is quite noisy. When only retaining the most salient onsets and offsets by applying temporal and across-frequency constraints (right panel), the remaining onsets and offsets can be used as temporal markers, which clearly mark the beginning and the end of individual auditory events.
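The temporal fusion constraint can be illustrated with a small Python sketch; `fuse_within` plays the role of trm_fuseWithinSec expressed in frames, and the greedy keep-the-stronger rule is an assumption about the exact fusion mechanism rather than the toolbox code:

```python
def fuse_transients(onsets, fuse_within=3):
    """Illustrative temporal fusion within one frequency channel: if two
    transients lie within fuse_within frames of each other, only the
    stronger one is kept. onsets is a frame-sorted list of
    (frame_index, strength_dB) tuples."""
    kept = []
    for frame, strength in onsets:
        if kept and frame - kept[-1][0] <= fuse_within:
            if strength > kept[-1][1]:
                kept[-1] = (frame, strength)   # replace the weaker neighbour
        else:
            kept.append((frame, strength))
    return kept
```

Applying such a constraint per channel, together with the across-frequency minimum-spread requirement, produces the cleaned-up maps shown in the right panel of Fig. 18.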
## Pitch (pitchProc.m)¶
Following [Slaney1990], [Meddis2001], [Meddis1997], the sub-band periodicity analysis obtained by the ACF can be integrated across frequency by giving equal weight to each frequency channel. The resulting SACF reflects the strength of periodicity as a function of the lag period for a given time frame, as illustrated in Fig. 13. Based on the SACF representation, the most salient peak within the plausible pitch frequency range p_pitchRangeHz is detected for each frame in order to obtain an estimation of the fundamental frequency. In addition to the peak position, the corresponding amplitude of the SACF is used to reflect the confidence of the underlying pitch estimation. More specifically, if the SACF magnitude drops below a pre-defined percentage p_confThresPerc of its global maximum, the corresponding pitch estimate is considered unreliable and set to zero. The estimated pitch contour is smoothed across time frames by a median filter of order p_orderMedFilt, which aims at reducing the amount of octave errors. A list of all parameters is presented in Table 15. In the context of pitch estimation, it will be useful to experiment with the settings related to the non-linear pre-processing of the ACF, as described in Auto-correlation (autocorrelationProc.m).
Table 15 List of parameters related to 'pitch'.
Parameter Default Description
p_pitchRangeHz [80 400] Plausible pitch frequency range in Hz
p_confThresPerc 0.7 Confidence threshold related to the SACF magnitude
p_orderMedFilt 3 Order of the median filter
The task of pitch estimation is demonstrated by the script DEMO_Pitch and the corresponding SACF plots are presented in Fig. 19. The pitch is estimated for an anechoic speech signal (top left panel). The corresponding SACF is presented in the top right panel, where each black cross represents the most salient lag period per time frame. The plausible pitch range is indicated by the two white dashed lines. The confidence measure of each individual pitch estimate is shown in the bottom left panel, which is used to set the estimated pitch to zero if the magnitude of the SACF is below the threshold. The final pitch contour is post-processed with a median filter and shown in the bottom right panel. Unvoiced frames, where no pitch frequency was detected, are indicated by NaNs.
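The peak-picking and confidence-thresholding steps can be sketched as follows (median smoothing omitted). The lag limits are derived from p_pitchRangeHz, and the function is an illustrative simplification of pitchProc.m, not its actual implementation:

```python
def pitch_from_sacf(sacf_frames, fs, range_hz=(80, 400), conf_thres=0.7):
    """Illustrative pitch tracking on a sequence of SACF frames: pick the
    most salient peak inside the plausible lag range, then discard
    estimates whose SACF value falls below conf_thres times the global
    maximum (returned as 0.0, i.e. unvoiced)."""
    lag_min = int(fs / range_hz[1])   # shortest plausible lag period
    lag_max = int(fs / range_hz[0])   # longest plausible lag period
    global_max = max(max(f[lag_min:lag_max + 1]) for f in sacf_frames)
    pitch = []
    for f in sacf_frames:
        best = max(range(lag_min, lag_max + 1), key=lambda l: f[l])
        pitch.append(fs / best if f[best] >= conf_thres * global_max else 0.0)
    return pitch
```

A subsequent median filter of order p_orderMedFilt would then remove isolated octave errors from the resulting contour.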
## Medial Olivo-Cochlear (MOC) feedback (mocProc.m)¶
It is well known that an efferent pathway of fibers exists in the auditory system, originating from the auditory neurons in the olivary complex and projecting to the outer hair cells [Guinan2006]. This pathway operates as a top-down feedback path, as opposed to the bottom-up peripheral signal transmission towards the brain, affecting the movement of the basilar membrane in response to the input stimulus. The MOC processor mimics this feedback, particularly the part originating from the medial region of the olivary complex. In Auditory front-end, this feedback is realised by monitoring the output from the ratemap processor, which corresponds to the auditory neurons’ firing rate, and by controlling accordingly the nonlinear path gain of the DRNL processor, which corresponds to the basilar membrane’s nonlinear operation. This approach is based on the work of [Clark2012], except that the auditory nerve processing model is simplified as the ratemap processor in Auditory front-end.
The input to the MOC processor is the time frame-frequency representation from the ratemap processor. This is then converted into an attenuation factor per each frequency channel. The constants for this rate-to-attenuation conversion are internal parameters of the processor, which can be set in accordance with various physiological findings such as those of [Liberman1988]. The amplitude relationship was adopted from the work of [Clark2012]. The time course and delay of the feedback activity, such as in the work of [Backus2006], can be approximated by adjusting the leaky integrator time constant rm_decaySec and the window step size rm_hSizeSec of the ratemap processor.
In addition to this so-called reflexive feedback, realised as a closed-loop operation, the reflective feedback is realised by means of additional control parameters that can be modified externally in an open-loop manner. The two parameters moc_mocIpsi and moc_mocContra are included for this purpose. Depending on the application, these two can be accessed and adjusted via the Blackboard system, and applied jointly with the reflexive feedback to the nonlinear path as the final multiplicative gain factor. Table 16 lists the parameters for the processor, including the two mentioned above. The other two parameters moc_mocThresholdRatedB and moc_mocMaxAttenuationdB are specified such that the input level-attenuation relationship best fits the data of [Liberman1988], which is scaled within a range of 0 dB to 40 dB by [Clark2012].
Table 16 List of parameters related to the auditory representation ’moc’.
Parameter Default Description
moc_mocIpsi 1 Ipsilateral MOC feedback factor (0 to 1)
moc_mocContra 1 Contralateral MOC feedback factor (0 to 1)
moc_mocThresholdRatedB -180 Threshold ratemap value for MOC activation in dB
moc_mocMaxAttenuationdB 40 Maximum possible MOC attenuation in dB
Fig. 20 shows, firstly on the left panel, the input-output characteristics of the MOC processor, using on-frequency stimulation from tones at 520 Hz and 3980 Hz, same as in the work of [Liberman1988]. As mentioned above, the relationship between the input level and the MOC attenuation activity through the ratemap representation was derived through curve fitting to the available data set of [Liberman1988], which is also shown on the plot. An example of input signal-DRNL output pair at 40 dB input level is shown on the right panel. The feedback applies an attenuation at the later part of the tone. These plots can be generated by running the script DEMO_MOC.m.
## Amplitude modulation spectrogram (modulationProc.m)¶
The detection of envelope fluctuations is a very fundamental ability of the human auditory system which plays a major role in speech perception. Consequently, computational models have tried to exploit speech- and noise-specific characteristics of amplitude modulations by extracting so-called amplitude modulation spectrogram (AMS) features with linearly-scaled modulation filters [Kollmeier1994], [Tchorz2003], [Kim2009], [May2013a], [May2014a], [May2014b]. The use of linearly-scaled modulation filters is, however, not consistent with psychoacoustic data on modulation detection and masking in humans [Bacon1989], [Houtgast1989], [Dau1997a], [Dau1997b], [Ewert2000]. As demonstrated by [Ewert2000], the processing of envelope fluctuations can be described effectively by a second-order band-pass filter bank with logarithmically-spaced centre frequencies. Moreover, it has been shown that an AMS feature representation based on an auditory-inspired modulation filter bank with logarithmically-scaled modulation filters substantially improved the performance of computational speech segregation in the presence of stationary and fluctuating interferers [May2014c]. In addition, such processing based on auditory-inspired modulation filters has recently also been successful in speech intelligibility prediction studies [Joergensen2011], [Joergensen2013]. To investigate the contribution of both AMS feature representations, the amplitude modulation processor can be used to extract linearly- and logarithmically-scaled AMS features. Therefore, each frequency channel of the IHC representation is analysed by a bank of modulation filters. The type of modulation filters can be controlled by setting the parameter ams_fbType to either ’lin’ or ’log’. To illustrate the difference between linearly-scaled and logarithmically-scaled modulation filters, the corresponding filter bank responses are shown in Fig. 21.
The linear modulation filter bank is implemented in the frequency domain, whereas the logarithmically-scaled filter bank is realised by a bank of second-order IIR Butterworth filters with a constant-Q factor of 1. The modulation filter with the lowest centre frequency is always implemented as a low-pass filter, as illustrated in the right panel of Fig. 21.
Similarly to the gammatone processor described in Gammatone (gammatoneProc.m), there are different ways to control the centre frequencies of the individual modulation filters, which depend on the type of modulation filters:
• ams_fbType = 'lin'
1. Specify ams_lowFreqHz, ams_highFreqHz and ams_nFilter. The requested number of filters ams_nFilter will be linearly-spaced between ams_lowFreqHz and ams_highFreqHz. If ams_nFilter is omitted, the number of filters will be set to 15 by default.
• ams_fbType = 'log'
1. Directly define a vector of centre frequencies, e.g. ams_cfHz = [4 8 16 ...]. In this case, the parameters ams_lowFreqHz, ams_highFreqHz, and ams_nFilter are ignored.
2. Specify ams_lowFreqHz and ams_highFreqHz. Starting at ams_lowFreqHz, the centre frequencies will be logarithmically-spaced at integer powers of two, e.g. 2^2, 2^3, 2^4 ... until the higher frequency limit ams_highFreqHz is reached.
3. Specify ams_lowFreqHz, ams_highFreqHz and ams_nFilter. The requested number of filters ams_nFilter will be spaced logarithmically as power of two between ams_lowFreqHz and ams_highFreqHz.
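The spacing rules for the 'log' filter bank above can be sketched as follows (illustrative Python, not the Matlab implementation; the function name is hypothetical):

```python
def ams_log_cfs(low_hz=4.0, high_hz=1024.0, n_filter=None):
    """Illustrative centre-frequency spacing for ams_fbType = 'log':
    without ams_nFilter, integer powers of two from low_hz up to high_hz;
    with ams_nFilter, logarithmically-spaced between the two limits."""
    if n_filter is None:
        cfs, f = [], low_hz
        while f <= high_hz:
            cfs.append(f)
            f *= 2.0   # next octave
        return cfs
    ratio = (high_hz / low_hz) ** (1.0 / (n_filter - 1))
    return [low_hz * ratio ** k for k in range(n_filter)]
```

With the defaults this yields the nine octave-spaced centre frequencies 4, 8, ..., 1024 Hz.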
The temporal resolution at which the AMS features are computed is specified by the window size ams_wSizeSec and the step size ams_hSizeSec. The window size is an important parameter, because it determines how many periods of the lowest modulation frequencies can be resolved within one individual time frame. Moreover, the window shape can be adjusted by ams_wname. Finally, the IHC representation can be downsampled prior to modulation analysis by selecting a downsampling ratio ams_dsRatio larger than 1. A full list of AMS feature parameters is shown in Table 17.
Table 17 List of parameters related to 'ams_features'.
Parameter Default Description
ams_fbType 'log' Filter bank type ('lin' or 'log')
ams_nFilter [] Number of modulation filters (integer)
ams_lowFreqHz 4 Lowest modulation filter centre frequency in Hz
ams_highFreqHz 1024 Highest modulation filter centre frequency in Hz
ams_cfHz [] Vector of modulation filter centre frequencies in Hz
ams_dsRatio 4 Downsampling ratio of the IHC representation
ams_wSizeSec 32E-3 Window duration in s
ams_hSizeSec 16E-3 Window step size in s
ams_wname 'rectwin' Window name
The functionality of the AMS feature processor is demonstrated by the script DEMO_AMS and the corresponding four plots are presented in Fig. 22. The time domain speech signal (top left panel) is transformed into an IHC representation (top right panel) using 23 frequency channels spaced between 80 and 8000 Hz. The linear and the logarithmic AMS feature representations are shown in the bottom panels. The responses of the modulation filters are stacked on top of each other for each IHC frequency channel, such that the AMS feature representations can be read like spectrograms. It can be seen that the linear AMS feature representation is noisier in comparison to the logarithmically-scaled AMS features. Moreover, the logarithmically-scaled modulation pattern shows a much higher correlation with the activity reflected in the IHC representation.
## Spectro-temporal modulation spectrogram¶
Neuro-physiological studies suggest that the response of neurons in the primary auditory cortex of mammals are tuned to specific spectro-temporal patterns [Theunissen2001], [Qiu2003]. This response characteristic of neurons can be described by the so-called STRF. As suggested by [Qiu2003], the STRF can be effectively modelled by two-dimensional (2D) Gabor functions. Based on these findings, a spectro-temporal filter bank consisting of 41 Gabor filters has been designed by [Schaedler2012]. This filter bank has been optimised for the task of ASR, and the respective real parts of the 41 Gabor filters are shown in Fig. 23.
The input is a log-compressed rate-map with a required resolution of 100 Hz, which corresponds to a step size of 10 ms. To reduce the correlation between individual Gabor features and to limit the dimensions of the resulting Gabor feature space, a selection of representative rate-map frequency channels will be automatically performed for each Gabor filter [Schaedler2012]. For instance, the reference implementation based on 23 frequency channels produces a 311 dimensional Gabor feature space.
The Gabor feature processor is demonstrated by the script DEMO_GaborFeatures.m, which produces the two plots shown in Fig. 24. A log-compressed rate-map with 25 ms time frames and 23 frequency channels spaced between 124 and 3657 Hz is shown in the left panel for a speech signal. These rate-map parameters have been adjusted to meet the specifications as recommended in the ETSI standard [ETSIES]. The corresponding Gabor feature space with 311 dimensions is presented in the right panel, where vowel transitions (e.g. at time frames around 0.2 s) are well captured. This aspect might be particularly relevant for the task of ASR.
## Cross-correlation (crosscorrelationProc.m)¶
The IHC representations of the left and the right ear signals are used to compute the normalised CCF in the FFT domain for short time frames of cc_wSizeSec duration with a step size of cc_hSizeSec. The CCF is normalised by the auto-correlation sequence at lag zero. This normalised CCF is then evaluated for time lags within cc_maxDelaySec (e.g., [-1 ms, 1 ms]) and is thus a three-dimensional function of time frame, frequency channel and lag time. An overview of all CCF parameters is given in Table 18. Note that the choice of these parameters will influence the computation of the ITD and the IC processors, which are described in Interaural time differences (itdProc.m) and Interaural coherence (icProc.m), respectively.
Table 18 List of parameters related to 'crosscorrelation'.
Parameter Default Description
cc_wname 'hann' Window type
cc_wSizeSec 0.02 Window duration in s
cc_hSizeSec 0.01 Window step size in s
cc_maxDelaySec 0.0011 Maximum delay in s considered in CCF computation
The script DEMO_Crosscorrelation.m demonstrates the functionality of the CCF function and the resulting plots are shown in Fig. 25. The left panel shows the ear signals for a speech source that is located closer to the right ear. As a result, the left ear signal is smaller in amplitude and is delayed in comparison to the right ear signal. The corresponding CCF is shown in the right panel for 32 auditory channels, where peaks are centred around positive time lags, indicating that the source is closer to the right ear. This is even more evident by looking at the SCCF, as shown in the bottom right panel.
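The normalisation by the lag-zero auto-correlation can be sketched for a single frame pair. The sign convention chosen here follows the text, where peaks at positive lags indicate a source closer to the right ear; this is illustrative code, not crosscorrelationProc.m (which works in the FFT domain):

```python
import math

def normalised_ccf(left, right, max_lag):
    """Illustrative normalised CCF for one frame pair. With this sign
    convention a peak at positive lags means the left ear signal is
    delayed, i.e. the source is closer to the right ear."""
    norm = math.sqrt(sum(x * x for x in left) * sum(x * x for x in right))
    n = len(left)
    ccf = []
    for lag in range(-max_lag, max_lag + 1):
        s = sum(left[i + lag] * right[i]
                for i in range(n) if 0 <= i + lag < n)
        ccf.append(s / norm if norm > 0 else 0.0)
    return ccf
```

The maximum of this function over lag also directly yields the IC value used by the coherence processor described below.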
## Interaural time differences (itdProc.m)¶
The ITD between the left and the right ear signal is estimated for individual frequency channels and time frames by locating the time lag that corresponds to the most prominent peak in the normalised CCF. This estimation is further refined by a parabolic interpolation stage [May2011], [May2013b]. The ITD processor does not have any adjustable parameters, but it relies on the CCF described in Cross-correlation (crosscorrelationProc.m) and its corresponding parameters (see Table 18). The ITD representation is computed by using the request entry ’itd’.
The ITD processor is demonstrated by the script DEMO_ITD.m, which produces two plots as shown in Fig. 26. The ear signals for a speech source that is located closer to the right ear are shown in the left panel. The corresponding ITD estimation is presented for each individual TF unit (right panel). Apart from a few estimation errors, the estimated ITD between both ears is in the range of 0.5 ms for the majority of TF units.
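The parabolic refinement of the discrete CCF peak can be sketched as follows: fitting a parabola through the peak sample and its two neighbours yields a fractional lag, allowing ITD estimates finer than one sample (illustrative, not the toolbox code):

```python
def parabolic_peak(ccf, k):
    """Refine the integer-lag CCF maximum at index k by fitting a parabola
    through the peak and its two neighbours, returning a fractional lag."""
    y0, y1, y2 = ccf[k - 1], ccf[k], ccf[k + 1]
    denom = y0 - 2.0 * y1 + y2
    return k if denom == 0 else k + 0.5 * (y0 - y2) / denom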
## Interaural level differences (ildProc.m)¶
The ILD is estimated for individual frequency channels by comparing the frame-based energy of the left and the right-ear IHC representations. The temporal resolution can be controlled by the frame size ild_wSizeSec and the step size ild_hSizeSec. Moreover, the window shape can be adjusted by the parameter ild_wname. The resulting ILD is expressed in dB and negative values indicate a sound source positioned at the left-hand side, whereas a positive ILD corresponds to a source located at the right-hand side. A full list of parameters is shown in Table 19.
Table 19 List of parameters related to 'ild'.
Parameter Default Description
ild_wSizeSec 20E-3 Window duration in s
ild_hSizeSec 10E-3 Window step size in s
ild_wname 'hann' Window name
The ILD processor is demonstrated by the script DEMO_ILD.m and the resulting plots are presented in Fig. 27. The ear signals are shown for a speech source that is more closely located to the right ear (left panel). The corresponding ILD estimates are presented for individual TF units. It is apparent that the ILDs change considerably as a function of the centre frequency. Whereas hardly any ILDs are observed for low frequencies, a strong influence can be seen at higher frequencies where ILDs can be as high as 30 dB.
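The frame-based ILD computation can be sketched as the energy ratio of the two ear frames in dB, with the sign convention from the text (positive values mean the source is at the right-hand side); an illustrative sketch, not ildProc.m:

```python
import math

def ild_db(left_frame, right_frame):
    """Illustrative frame-based ILD: right-to-left energy ratio in dB.
    Positive values indicate a source at the right-hand side."""
    e_left = sum(x * x for x in left_frame)
    e_right = sum(x * x for x in right_frame)
    return 10.0 * math.log10(e_right / e_left)
```

Doubling the right-ear amplitude relative to the left yields an ILD of about +6 dB.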
## Interaural coherence (icProc.m)¶
The IC is estimated by determining the maximum value of the normalised CCF. It has been suggested that the IC can be used to select TF units where the binaural cues (ITDs and ILDs) are dominated by the direct sound of an individual sound source, and thus, are likely to reflect the true location of one of the active sources [Faller2004]. The IC processor does not have any controllable parameters itself, but it depends on the settings of the CCF processor, which is described in Cross-correlation (crosscorrelationProc.m). The IC representation is computed by using the request entry ’ic’.
The application of the IC processor is demonstrated by the script DEMO_IC, which produces the four plots shown in Fig. 28. The top left and bottom left panels show the anechoic and reverberant speech signal, respectively. It can be seen that the time domain signal is smeared due to the influence of the reverberation. The IC for the anechoic signal is close to one for most of the individual TF units, which indicates that the corresponding binaural cues are reliable. In contrast, the IC for the reverberant signal is substantially lower for many TF units, suggesting that the corresponding binaural cues might be unreliable due to the impact of the reverberation.
## Precedence effect (precedenceProc.m)¶
The precedence effect describes the ability of humans to fuse and localize a sound based on its first-arriving part, in the presence of a delayed repetition below an echo-generating threshold [Wallach1949]. The effect of the later-arriving sound is suppressed by the first part in the localization process. The precedence effect processor in Auditory front-end models this, with a strategy based on the work of [Braasch2013]. The processor detects and removes the lag from a binaural input signal with a delayed repetition, by means of an autocorrelation mechanism and deconvolution. It then derives the ITD and ILD based on these lag-removed signals.
The input to the precedence effect processor is a binaural time-frequency signal chunk from the gammatone filterbank. For each chunk, a pair of ITD and ILD values is then calculated as the output, by integrating the ITDs and ILDs across the frequency channels according to the weighted-image model [Stern1988], and through amplitude-weighted summation. Since these ITD/ILD calculation methods of the precedence effect processor differ from those used by the Auditory front-end ITD and ILD processors, the latter are not connected to the precedence effect processor. Instead, the steps for the correlation analyses and the ITD/ILD calculation are coded inside the processor as its own specific techniques. Table 20 lists the parameters needed to operate the precedence effect processor.
Table 20 List of parameters related to the auditory representation ’precedence’.
Parameter Default Description
prec_wSizeSec 20E-3 Window duration in s
prec_hSizeSec 10E-3 Window step size in s
prec_maxDelaySec 10E-3 Maximum delay in s for autocorrelation computation
Fig. 29 shows the output from the demonstration script DEMO_precedence.m. The input signal is an 800-Hz-wide bandpass noise of 400 ms duration, centered at 500 Hz, mixed with a reflection that has a 2 ms delay, and made binaural with an ITD of 0.4 ms and a 0-dB ILD. During the processing, windowed chunks of 20 ms length are used as the input. It can be seen that after some initial confusion, the processor estimates the intended ITD and ILD values as more chunks are analyzed.
[Backus2006] Backus, B. C. and Guinan, J. J. (2006), “Time-course of the human medial olivocochlear reflex,” The Journal of the Acoustical Society of America 119(5 Pt 1), pp. 2889–2904.
[Bacon1989] Bacon, S. P. and Grantham, D. W. (1989), “Modulation masking: Effects of modulation frequency, depths, and phase,” Journal of the Acoustical Society of America 85(6), pp. 2575–2580.
[Bernstein1996] Bernstein, L. R. and Trahiotis, C. (1996), “The normalized correlation: Accounting for binaural detection across center frequency,” Journal of the Acoustical Society of America 100(6), pp. 3774–3784.
[Bernstein1999] Bernstein, L. R., van de Par, S., and Trahiotis, C. (1999), “The normalized interaural correlation: Accounting for NoS thresholds obtained with Gaussian and “low-noise” masking noise,” Journal of the Acoustical Society of America 106(2), pp. 870–876.
[Braasch2013] Braasch, J. (2013), “A precedence effect model to simulate localization dominance using an adaptive, stimulus parameter-based inhibition process.” The Journal of the Acoustical Society of America 134(1), pp. 420–35.
[Breebart2001] Breebaart, J., van de Par, S., and Kohlrausch, A. (2001), “Binaural processing model based on contralateral inhibition. I. Model structure,” Journal of the Acoustical Society of America 110(2), pp. 1074–1088.
[Bregman1990] Bregman, A. S. (1990), Auditory scene analysis: The perceptual organization of sound, The MIT Press, Cambridge, MA, USA.
[Brown1994] Brown, G. J. and Cooke, M. P. (1994), “Computational auditory scene analysis,” Computer Speech and Language 8(4), pp. 297–336.
[Brown2010] Brown, G. J., Ferry, R. T., and Meddis, R. (2010), “A computer model of auditory efferent suppression: implications for the recognition of speech in noise.” The Journal of the Acoustical Society of America 127(2), pp. 943–54.
[Clark2012] Clark, N. R., Brown, G. J., Jürgens, T., and Meddis, R. (2012), “A frequency-selective feedback model of auditory efferent suppression and its implications for the recognition of speech in noise.” Journal of the Acoustical Society of America 132(3), pp. 1535–1541.
[Cooke2001] Cooke, M., Green, P., Josifovski, L., and Vizinho, A. (2001), “Robust automatic speech recognition with missing and unreliable acoustic data,” Speech Communication 34(3), pp. 267–285.
[Dau1996] Dau, T., Püschel, D., and Kohlrausch, A. (1996), “A quantitative model of the “effective” signal processing in the auditory system. I. Model structure,” Journal of the Acoustical Society of America 99(6), pp. 3615–3622.
[Dau1997a] Dau, T., Püschel, D., and Kohlrausch, A. (1997a), “Modeling auditory processing of amplitude modulation. I. Detection and masking with narrow-band carriers,” Journal of the Acoustical Society of America 102(5), pp. 2892–2905.
[Dau1997b] Dau, T., Püschel, D., and Kohlrausch, A. (1997b), “Modeling auditory processing of amplitude modulation. II. Spectral and temporal integration,” Journal of the Acoustical Society of America 102(5), pp. 2906–2919.
[ETSIES] ETSI ES 201 108 v1.1.3 (2003), “Speech processing, transmission and quality aspects (STQ); distributed speech recognition; front-end feature extraction algorithm; compression algorithms,” http://www.etsi.org.
[Ewert2000] Ewert, S. D. and Dau, T. (2000), “Characterizing frequency selectivity for envelope fluctuations,” Journal of the Acoustical Society of America 108(3), pp. 1181–1196.
[Faller2004] Faller, C. and Merimaa, J. (2004), “Source localization in complex listening situations: Selection of binaural cues based on interaural coherence,” Journal of the Acoustical Society of America 116(5), pp. 3075–3089.
[Ferry2007] Ferry, R. T. and Meddis, R. (2007), “A computer model of medial efferent suppression in the mammalian auditory system,” The Journal of the Acoustical Society of America 122(6), pp. 3519.
[Glasberg1990] Glasberg, B. R. and Moore, B. C. J. (1990), “Derivation of auditory filter shapes from notched-noise data,” Hearing Research 47(1-2), pp. 103–138.
[Godde1994] Goode, R. L., Killion, M., Nakamura, K., and Nishihara, S. (1994), “New knowledge about the function of the human middle ear: development of an improved analog model.” The American journal of otology 15(2), pp. 145–154.
[Guinan2006] Guinan, J. J. (2006), “Olivocochlear efferents: anatomy, physiology, function, and the measurement of efferent effects in humans.” Ear and hearing 27(6), pp. 589–607, http://www.ncbi.nlm.nih.gov/pubmed/17086072.
[Houtgast1989] Houtgast, T. (1989), “Frequency selectivity in amplitude-modulation detection,” Journal of the Acoustical Society of America 85(4), pp. 1676–1680.
[Jensen2004] Jensen, K. and Andersen, T. H. (2004), “Real-time beat estimation using feature extraction,” in Computer Music Modeling and Retrieval, edited by U. K. Wiil, Springer, Berlin–Heidelberg, Lecture Notes in Computer Science, pp. 13–22.
[Jepsen2008] Jepsen, M. L., Ewert, S. D., and Dau, T. (2008), “A computational model of human auditory signal processing and perception.” Journal of the Acoustical Society of America 124(1), pp. 422–438.
[Joergensen2011] Jørgensen, S. and Dau, T. (2011), “Predicting speech intelligibility based on the signal-to-noise envelope power ratio after modulation-frequency selective processing,” Journal of the Acoustical Society of America 130(3), pp. 1475–1487.
[Joergensen2013] Jørgensen, S., Ewert, S. D., and Dau, T. (2013), “A multi-resolution envelope-power based model for speech intelligibility,” Journal of the Acoustical Society of America 134(1), pp. 1–11.
[Kim2009] Kim, G., Lu, Y., Hu, Y., and Loizou, P. C. (2009), “An algorithm that improves speech intelligibility in noise for normal-hearing listeners,” Journal of the Acoustical Society of America 126(3), pp. 1486–1494.
[Klapuri1999] Klapuri, A. (1999), “Sound onset detection by applying psychoacoustic knowledge,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3089–3092.
[Kollmeier1994] Kollmeier, B. and Koch, R. (1994), “Speech enhancement based on physiological and psychoacoustical models of modulation perception and binaural interaction,” Journal of the Acoustical Society of America 95(3), pp. 1593–1602.
[Lerch2012] Lerch, A. (2012), An Introduction to Audio Content Analysis: Applications in Signal Processing and Music Informatics, John Wiley & Sons, Hoboken, NJ, USA.
[Liberman1988] Liberman, M. C. (1988), “Response properties of cochlear efferent neurons: monaural vs. binaural stimulation and the effects of noise,” Journal of Neurophysiology 60(5), pp. 1779–1798, http://jn.physiology.org/content/60/5/1779.
[Licklider1951] Licklider, J. C. R. (1951), “A duplex theory of pitch perception,” Experientia (4), pp. 128–134.
[Lopez-Poveda2001] Lopez-Poveda, E. A. and Meddis, R. (2001), “A human nonlinear cochlear filterbank,” Journal of the Acoustical Society of America 110(6), pp. 3107–3118.
[May2011] May, T., van de Par, S., and Kohlrausch, A. (2011), “A probabilistic model for robust localization based on a binaural auditory front-end,” IEEE Transactions on Audio, Speech, and Language Processing 19(1), pp. 1–13.
[May2012] May, T., van de Par, S., and Kohlrausch, A. (2012), “Noise-robust speaker recognition combining missing data techniques and universal background modeling,” IEEE Transactions on Audio, Speech, and Language Processing 20(1), pp. 108–121.
[May2013a] May, T. and Dau, T. (2013), “Environment-aware ideal binary mask estimation using monaural cues,” in IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), pp. 1–4.
[May2013b] May, T., van de Par, S., and Kohlrausch, A. (2013), “Binaural Localization and Detection of Speakers in Complex Acoustic Scenes,” in The technology of binaural listening, edited by J. Blauert, Springer, Berlin–Heidelberg–New York NY, chap. 15, pp. 397–425.
[May2014a] May, T. and Dau, T. (2014), “Requirements for the evaluation of computational speech segregation systems,” Journal of the Acoustical Society of America 136(6), pp. EL398– EL404.
[May2014b] May, T. and Gerkmann, T. (2014), “Generalization of supervised learning for binary mask estimation,” in International Workshop on Acoustic Signal Enhancement, Antibes, France.
[May2014c] May, T. and Dau, T. (2014), “Computational speech segregation based on an auditory-inspired modulation analysis,” Journal of the Acoustical Society of America 136(6), pp. 3350-3359.
[Meddis1991] Meddis, R. and Hewitt, M. J. (1991), “Virtual pitch and phase sensitivity of a computer model of the auditory periphery. I: Pitch identification,” Journal of the Acoustical Society of America 89(6), pp. 2866–2882.
[Meddis1997] Meddis, R. and O’Mard, L. (1997), “A unitary model of pitch perception,” Journal of the Acoustical Society of America 102(3), pp. 1811–1820.
[Meddis2001] Meddis, R., O’Mard, L. P., and Lopez-Poveda, E. A. (2001), “A computational algorithm for computing nonlinear auditory frequency selectivity,” Journal of the Acoustical Society of America 109(6), pp. 2852–2861.
[Misra2004] Misra, H., Ikbal, S., Bourlard, H., and Hermansky, H. (2004), “Spectral entropy based feature for robust ASR,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 193–196.
[Peeters2011] Peeters, G., Giordano, B. L., Susini, P., Misdariis, N., and McAdams, S. (2011), “The timbre toolbox: Extracting audio descriptors from musical signals.” Journal of the Acoustical Society of America 130(5), pp. 2902–2916.
[Pueschel1988] Püschel, D. (1988), “Prinzipien der zeitlichen Analyse beim Hören,” Ph.D. thesis, University of Göttingen.
[Qiu2003] Qiu, A., Schreiner, C. E., and Escabì, M. A. (2003), “Gabor analysis of auditory midbrain receptive fields: Spectro-temporal and binaural composition.” Journal of Neurophysiology 90(1), pp. 456–476.
[Rabiner1977] Rabiner, L. R. (1977), “On the use of autocorrelation analysis for pitch detection,” IEEE Transactions on Audio, Speech, and Language Processing 25(1), pp. 24–33.
[Schaedler2012] Schädler, M. R., Meyer, B. T., and Kollmeier, B. (2012), “Spectro-temporal modulation subspace-spanning filter bank features for robust automatic speech recognition,” Journal of the Acoustical Society of America 131(5), pp. 4134–4151.
[Scheirer1997] Scheirer, E. and Slaney, M. (1997), “Construction and evaluation of a robust multifeature speech/music discriminator,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1331–1334.
[Slaney1990] Slaney, M. and Lyon, R. F. (1990), “A perceptual pitch detector,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 357–360.
[Smith1977] Smith, R. L. (1977), “Short-term adaptation in single auditory nerve fibers: some poststimulatory effects,” J Neurophysiol 40(5), pp. 1098–1111.
[Smith1983] Smith, R. L., Brachman, M. L., and Goodman, D. A. (1983), “Adaptation in the Auditory Periphery,” Annals of the New York Academy of Sciences 405(1), pp. 79–93.
[Soendergaard2013] Søndergaard, P. L. and Majdak, P. (2013), “The auditory modeling toolbox,” in The Technology of Binaural Listening, edited by J. Blauert, Springer, Heidelberg–New York NY–Dordrecht–London, chap. 2, pp. 33–56.
[Stern1988] Stern, R. M., Zeiberg, A. S., and Trahiotis, C. (1988), “Lateralization of complex binaural stimuli: A weighted-image model,” The Journal of the Acoustical Society of America 84(1), pp. 156–165, http://scitation.aip.org/content/asa/journal/jasa/84/1/10.1121/1.396982.
[Tchorz2003] Tchorz, J. and Kollmeier, B. (2003), “SNR estimation based on amplitude modulation analysis with applications to noise suppression,” IEEE Transactions on Audio, Speech, and Language Processing 11(3), pp. 184–192.
[Theunissen2001] Theunissen, F. E., David, S. V., Singh, N. C., Hsu, A., Vinje, W. E., and Gallant, J. L. (2001), “Estimating spatio-temporal receptive fields of auditory and visual neurons from their responses to natural stimuli,” Network: Computation in Neural Systems 12, pp. 289–316.
[Tolonen2000] Tolonen, T. and Karjalainen, M. (2000), “A computationally efficient multipitch analysis model,” IEEE Transactions on Audio, Speech, and Language Processing 8(6), pp. 708–716.
[Turgeon2002] Turgeon, M., Bregman, A. S., and Ahad, P. A. (2002), “Rhythmic masking release: Contribution of cues for perceptual organization to the cross-spectral fusion of concurrent narrow-band noises,” Journal of the Acoustical Society of America 111(4), pp. 1819–1831.
[Tzanetakis2002] Tzanetakis, G. and Cook, P. (2002), “Musical genre classification of audio signals,” IEEE Transactions on Audio, Speech, and Language Processing 10(5), pp. 293–302.
[Wallach1949] Wallach, H., Newman, E. B., and Rosenzweig, M. R. (1949), “The Precedence Effect in Sound Localization,” The American Journal of Psychology 62(3), pp. 315–336, http://www.jstor.org/stable/1418275.
[Wang2006] Wang, D. L. and Brown, G. J. (Eds.) (2006), Computational Auditory Scene Analysis: Principles, Algorithms and Applications, Wiley / IEEE Press.
[Young2006] Young, S., Evermann, G., Gales, M., Hain, T., Kershaw, D., Liu, X., Moore, G., Odell, J., Ollason, D., Povey, D., Valtchev, V., and Woodland, P. (2006), The HTK Book (for HTK Version 3.4), Cambridge University Engineering Department, http://htk.eng.cam.ac.uk.
# 1200V Hyperfast Diodes and Their Applications
In our modern society, electricity is becoming an ever more important source of energy. Common examples of energy users and producers are computing equipment (desktop computers and tablets, but also server farms including uninterruptible power supplies), communication equipment (mobile phones, base stations), electric vehicles (traction and battery charging), PV energy harvesting equipment (solar energy farms) and wind turbines (wind farms). Handling electric energy efficiently has become essential; 1200V diodes can help.
## High voltage
It is a well-known fact in electric energy transfer that Ohmic losses in low voltage systems have a larger impact on efficiency than in high voltage systems. It is for that reason that new standards for mobile phone and tablet charging (USB-PD) have been developed that allow these devices to be charged from 9V, 12V or even 20V sources, whereas in the early days 5V was the standard.
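The voltage dependence of the Ohmic loss is easy to quantify: for a fixed delivered power the cable current is I = P / V, so the I²R loss falls with the square of the voltage. A small sketch (the cable resistance and power levels are illustrative values, not USB-PD specification data):

```python
# Same delivered power over the same cable at different voltages:
# I = P / V, so conduction loss I^2 * R drops quadratically with V.
def cable_loss(power_w, voltage_v, resistance_ohm):
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

loss_5v = cable_loss(10.0, 5.0, 0.1)    # 10 W at 5 V: 2 A   -> 0.4 W lost
loss_20v = cable_loss(10.0, 20.0, 0.1)  # 10 W at 20 V: 0.5 A -> 0.025 W lost
```

Quadrupling the voltage cuts the cable loss by a factor of sixteen for the same delivered power.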
Of course, mobile phones and tablets are relatively low power systems (on the order of 1 to 10 watts), but a similar tendency can be observed in higher power systems (1 kW and above).
High power electric power conversion systems like UPS, PV inverters and EV chargers are commonly built using DC/DC converter building blocks like Boost/Flyback Converters, Buck/Forward Converters and Resonant Converters. Flyback and Forward Converters are basically the isolated counterparts of the Boost and the Buck Converter respectively.
Traditionally, the highest DC voltage levels in these kinds of systems used to be around 400V. Many new high power systems use an internal DC voltage of 700V or above in order to benefit from efficiency advantages that can more easily be realized at higher voltage; basically for the same reason why many modern mobile phones use USB-PD voltage levels on the order of 12V instead of the traditional 5V USB level.
Fig. 1 Block diagram of a UPS system with various DC/DC building blocks
## Hard switching, soft switching and power loss resulting from diode behavior
When it comes to the switching behavior of DC/DC converters, there are essentially two regimes: hard switching and soft switching. Bipolar pn-junction diodes behave differently under hard switching and soft switching circumstances. This can be illustrated by examining the operation of a boost converter in continuous conduction mode (CCM) and of an LLC resonant converter: the CCM boost converter is essentially hard switching, while the LLC resonant converter is essentially soft switching.
### Hard switching
Fig. 2 Principle diagram of a boost converter
A CCM boost converter is often used as the power factor correction (PFC) circuit in a power converter system.
Fig. 3 Current waveform in the diode of a CCM boost converter (simulated); black = IL1, red = ID1
When, at time t1, switch Q1 is closed, the current in inductor L1 (IL1) builds up while the current in diode D1 (ID1) stops flowing. When Q1 is opened at time t2, the current starts to flow through the diode. At the moment that D1 must start to conduct (at time t2), only a low concentration of charge carriers (electrons and holes) is present in the drift region of the diode. That makes the initial impedance of the diode relatively high, which results in a high voltage drop (Vfr) across the diode. After a certain time (tfr, mostly on the order of 10 to 100 ns) sufficient charge carriers have been injected into the drift region, the impedance of the diode drops dramatically and the voltage drop across the diode is reduced to the static VF level for the given forward current. The energy loss due to diode switch-on (switch-on loss) can be approximated by:
$$E_{\text{sw-on}} = \frac{1}{2} \, I_F(t_2) \cdot V_{fr} \cdot t_{fr} \tag{1}$$
The energy Esw-on is completely dissipated in the diode itself.
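As a numeric illustration, the switch-on loss approximation above can be evaluated directly; the current, overshoot voltage and tfr values below are made-up examples, not data for any specific diode:

```python
# Switch-on energy per cycle: E = 0.5 * I_F(t2) * V_fr * t_fr.
# All numbers are illustrative assumptions.
def switch_on_loss_j(i_f_a, v_fr_v, t_fr_s):
    return 0.5 * i_f_a * v_fr_v * t_fr_s

e_sw_on = switch_on_loss_j(10.0, 15.0, 50e-9)  # 10 A, 15 V overshoot, 50 ns
```

A few microjoules per cycle; at hundreds of kilohertz this adds up to a noticeable power loss inside the diode.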
After switching on, the current through the diode continues to flow and ramps down. Ramp down continues until switch Q1 is closed again. The energy dissipation during the diode conduction period is:
$$E_{\text{cond}} = \int_{t_2}^{t_1} V_F(I_F) \cdot I_F(t) \, dt \tag{2}$$
This can be estimated to be approximately equal to:
$$E_{\text{cond}} = V_F' \cdot I_F' \cdot (t_1 - t_2)$$
Here VF' and IF' are the average VF and IF levels, respectively. All conduction loss is dissipated in the diode.
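The average-value estimate is usually close to the full integral. The sketch below integrates V_F(I_F(t)) · I_F(t) numerically for an assumed linear current ramp and a simple linear V_F(I_F) model; both models and all numbers are illustrative assumptions:

```python
# Numerically integrate V_F(I_F(t)) * I_F(t) over the conduction interval
# (midpoint rule) and compare with the average-value estimate.
def conduction_loss_j(i_start_a, i_end_a, v_f, duration_s, steps=1000):
    dt = duration_s / steps
    total = 0.0
    for k in range(steps):
        i_f = i_start_a + (i_end_a - i_start_a) * (k + 0.5) / steps  # midpoint
        total += v_f(i_f) * i_f * dt
    return total

v_f = lambda i: 1.0 + 0.05 * i          # simple static forward-drop model
e_exact = conduction_loss_j(12.0, 8.0, v_f, 5e-6)   # 12 A -> 8 A over 5 us
e_approx = v_f(10.0) * 10.0 * 5e-6                  # averages: V_F' at I_F' = 10 A
```

For these values the average-value estimate is within about half a percent of the integral.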
At the moment Q1 is closed the sequence repeats.
The current that flows in diode D1 at the moment it is being switched off (t1) significantly deviates from zero. Under that circumstance a bipolar diode cannot block the current instantaneously: the stored charge in the drift region must be removed before the diode can block the current flow. The reverse current associated with the extraction of the stored charge can clearly be recognized in Figure 3. Removal of the stored charge (Qs) leads to power loss: switch-off loss (Esw-off). The power loss associated with switching off is proportional to the voltage trajectory that the stored charge needs to travel; in a normal boost converter that voltage trajectory equals the output voltage Vout, because the stored charge starts at the Vout level and is 'transported' to ground potential (0 V) when Q1 closes.
$$E_{\text{sw-off}} = V_{\text{out}} \cdot Q_s \tag{3}$$
The stored charge Qs is the product of the current flowing in the diode (IF) and the (ambipolar) charge carrier lifetime τa.
$$Q_s = I_F \cdot \tau_a \tag{4}$$
Combining the above two equations, and knowing that switch-off occurs at t1, the expression for the switch-off loss is:
$$E_{\text{sw-off}} = V_{\text{out}} \cdot I_F(t_1) \cdot \tau_a \tag{5}$$
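Putting the stored-charge relations above together numerically; the bus voltage, forward current and carrier lifetime below are illustrative assumptions, not device data:

```python
# Stored charge and switch-off loss: Q_s = I_F * tau_a, E = V_out * Q_s.
def switch_off_loss_j(v_out_v, i_f_a, tau_a_s):
    q_s_c = i_f_a * tau_a_s        # stored charge in coulombs
    return v_out_v * q_s_c

e_sw_off = switch_off_loss_j(700.0, 8.0, 60e-9)  # 700 V bus, 8 A, 60 ns lifetime
```

Note that this per-cycle energy is roughly two orders of magnitude larger than the switch-on loss example, which is why switch-off behavior dominates hard-switched diode selection.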
The energy Esw-off is normally only partially dissipated in the diode itself; generally a lot of the energy will be dissipated in switch Q1.
The ambipolar charge carrier lifetime τa is not a constant; lifetime decreases with current density in the silicon device. That makes it interesting to consider using a smaller diode in order to reduce power loss in the system as a whole, especially when switching loss is already dominant over conduction loss. Although the conduction loss will increase when a smaller diode is used, that increase may be more than compensated by the reduction in switching loss. See also the text box “Conduction loss versus switching loss”.
Charge carrier lifetime increases with temperature. Therefore it makes sense to try to keep the operating temperature of a bipolar diode low in order to keep switching loss low.
### Soft switching
Fig. 4 Principle diagram of an LLC resonant converter
An LLC resonant converter can often be found as a building block in a UPS or PV inverter. The switches Q1 and Q2, together with L1 (the magnetization inductance of the transformer), L2 (the leakage inductance of the transformer) and C1 (the series capacitance of resonating circuit) create a sinusoidal (or piecewise sinusoidal) current flowing out of the secondary side of the transformer. That sinusoidal current is rectified by the diode bridge (D1, D2, D3, D4), causing a DC voltage to result across the output buffer capacitor C2. The current that flows in diode pair D1 and D4 (and in the pair D2 and D3) is essentially zero when the diodes are switching on and switching off.
Fig. 5 Current waveforms in the diodes of an LLC resonant converter (simulated); blue = ID1 and ID4, red = ID2 and ID3
Because the diodes switch on at zero-crossing, diode turn-on losses are much lower than in a hard switching topology – the Vfr voltage overshoot is much lower and sometimes not even detectable. For switch-on power loss the same equation (1) applies as for a hard switching topology, but in a soft switching topology IF is close to zero, so the switch-on losses are nearly zero.
Diode switch-off losses are also much lower because the forward current level is approaching zero when the diode needs to turn off. Again the same equation (3 or 5) applies, but IF is substantially lower than in a hard switching topology. That makes switch-off losses much lower as well, which can easily be recognized in the diode's reverse recovery current magnitude in Figure 5: the reverse recovery current only slightly drops below zero.
The main requirement for the diode is that it must be fast enough to keep pace with the switching frequency of the (LLC resonant) power converter.
One component of the energy loss cannot be avoided: conduction loss. Also in soft switching topologies, the conduction loss is given by equation (2).
Because switching losses play a less significant role in soft switching topologies the same (ultrafast/hyperfast) bipolar diodes can be used up to much higher switching frequencies in soft switching topologies than in hard switching topologies.
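A rough way to see why the same diode tolerates a much higher frequency in a soft-switched stage is to evaluate the switch-off expression for a hard-switched current and for a near-zero resonant current; all numbers below are illustrative only:

```python
# Per-cycle switch-off loss E = V_out * I_F(t1) * tau_a for the same diode
# in two topologies; the only difference is the current at turn-off.
def e_sw_off_j(v_out_v, i_f_a, tau_a_s):
    return v_out_v * i_f_a * tau_a_s

hard = e_sw_off_j(700.0, 8.0, 60e-9)   # hard switching: full current at t1
soft = e_sw_off_j(700.0, 0.2, 60e-9)   # soft switching: current almost zero
ratio = hard / soft                    # per-cycle loss scales with I_F(t1)
```

With these numbers the per-cycle switch-off loss is forty times smaller in the soft-switched case, so the switching frequency can rise by a similar factor before switching losses become comparable.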
## 1200V diodes
Hyperfast bipolar diodes need lifetime control in order to switch fast (see also the text box “Lifetime control”); in principle there is no fundamental difference between 600V diodes and 1200V diodes. But, compared to 600V diodes, 1200V diodes require a wider drift region/depletion region in order to cope with the 1200V reverse voltage. The consequence of that wider region is that stored-charge extraction (at the moment the diode should turn off) takes longer. To make a 1200V diode just as fast as a 600V diode, the charge carrier lifetime needs to be reduced even further. This additional carrier lifetime reduction unfortunately also affects the forward voltage drop of the diode: VF rises and consequently conduction loss is higher. It is for this reason that 600V Hyperfast diodes have trr values specified on the order of 20 ns where 1200V Hyperfast diodes have trr values on the order of 60 ns.
Where picking the right balance between conduction loss and switching loss in 600V diodes was already a challenge, it is even more challenging for 1200V diodes.
### Leakage and high operating temperature capability
High reverse voltage across a bipolar diode's terminals causes a leakage current to flow. Hyperfast diodes need a high concentration of recombination centers (see also the text box “Lifetime control”) to obtain their fast switching properties, but unfortunately these recombination centers also act as generation centers that contribute to a higher leakage current. Furthermore, when the operating temperature of a Hyperfast diode rises, the activity of the generation centers increases, which leads to a higher leakage current.
When a Hyperfast diode must operate reliably at high temperature, it is essential that the leakage current does not rise to a level where the dissipation caused by leakage could result in thermal runaway of the device. To achieve that, a lifetime control method should be used that gives the Hyperfast diode these desired properties. The traditional so-called “Gold-kill” process usually does not allow the resulting Hyperfast diodes to be used above an operating temperature of 150 °C. An enhanced so-called “Platinum-kill” process delivers Hyperfast diodes that can be used up to temperatures of 175 °C and is therefore preferred for manufacturing Hyperfast low-leakage diodes that are capable of operating at high temperature.
Using the right lifetime control and choosing the right balance between conduction losses and switching losses results in 1200V Hyperfast diodes that enable cost effective and efficient high power/high voltage switched mode power conversion systems.
### Conduction loss versus switching loss
In all semiconductor switches (diodes, BJTs, MOSFETs, etc.) two aspects are generally dominant when it comes to power loss: current conduction and switching.
When a semiconductor switch is in the on-state, the current that flows through the device causes a voltage drop across the device. The product of voltage and current is the power loss that occurs in the on-state. On-state conduction losses can be reduced by making the semiconductor switch larger: use more silicon.
Switching losses occur when the 'charged state' of a semiconductor device must be changed because the device is required to go from the off-state to the on-state and vice versa. The amount of charge transport involved determines the energy loss per switching cycle. One way of reducing switching energy loss is to make the semiconductor switch smaller: use less silicon. A second way is to select a device type that requires less charge transport for the off- to on-state transition and vice versa. Unfortunately, the switches that require less charge transport commonly exhibit higher on-state losses.
Furthermore, switching energy loss per unit of time (that is: power loss) can be reduced by reducing the number of switching cycles per unit of time (that is: using a lower switching frequency). But in most switching applications the switching frequency or switching frequency range is dictated by external factors, so very often there is only limited freedom in selecting a switching frequency (range).
A trade-off must be made between conduction loss and switching loss, and the goal is to arrive at minimum total power loss. That means that the optimum device must be chosen depending on the application. In an application that switches at a very low frequency, a big device that requires a lot of charge transport for switching usually gives the lowest power loss. But when the switching frequency goes up, switching losses increase and the optimum device will be either a smaller switch, a switch that requires less charge transport for switching, or a combination of the two. Although one will have to tolerate higher conduction loss, the reduced switching loss will result in minimum total power loss.
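The trade-off can be made concrete by comparing the total loss P(f) = P_cond + f · E_sw for two hypothetical devices; all numbers below are invented purely to illustrate the crossover:

```python
# Total power loss versus switching frequency for a large die (low conduction
# loss, high switching energy) and a smaller one. Numbers are illustrative.
def total_loss_w(p_cond_w, e_sw_j, f_hz):
    return p_cond_w + f_hz * e_sw_j

LARGE = (2.0, 50e-6)   # (conduction loss in W, switching energy per cycle in J)
SMALL = (3.0, 10e-6)

# The curves cross where 2 + 50e-6*f == 3 + 10e-6*f, i.e. at 25 kHz.
crossover_hz = (SMALL[0] - LARGE[0]) / (LARGE[1] - SMALL[1])

best_low = min(LARGE, SMALL, key=lambda d: total_loss_w(d[0], d[1], 10e3))
best_high = min(LARGE, SMALL, key=lambda d: total_loss_w(d[0], d[1], 100e3))
```

Below the crossover frequency the large die wins; above it, the smaller device with the lower switching energy gives the lower total loss.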
Bipolar semiconductor devices benefit from the phenomenon that current conduction occurs not through only electrons or only holes, but through simultaneous charge transport by both electrons and holes. Additionally, the electron and hole concentrations in the drift layer increase with increasing current density, which results in a better-conducting drift layer as the current density in the drift layer increases: conductivity modulation. This phenomenon makes a bipolar structure (e.g. a pn-junction diode or BJT) in principle a better-conducting device per unit area than a unipolar device like a MOSFET or a Schottky diode. In other words: with the same amount of silicon you can make a better-conducting switch by using bipolar technology.
Unfortunately, the conductivity modulation phenomenon also has a disadvantage. When a bipolar device is conducting (that is: in the on-state), a high concentration of charge carriers (electrons and holes) has been injected into the drift layer of the device (that is what makes the device conduct so well). But when the bipolar device needs to switch to the off-state, these excess charge carriers must first be removed in one way or another. For that reason a bipolar device cannot switch off instantaneously: either one has to wait until the excess electrons and holes in the drift layer have recombined spontaneously, or one has to extract the excess charge carriers actively. This is why, in a bipolar diode that is forced to switch off quickly, a so-called reverse recovery current flows (which extracts the stored charge carriers from the drift layer) before the diode actually blocks the flow of current.
The amount of charge that can be injected and stored in the drift layer depends on the lifetime of the charge carriers. In 'normal' silicon the effective lifetime of charge carriers is on the order of several microseconds to hundreds of microseconds (the latter in ultra-pure silicon without lattice defects). Standard mains rectifier diodes are fabricated using this 'normal' silicon; a charge carrier lifetime of several microseconds is no limitation for timely spontaneous recombination or charge extraction in a system that switches at only 50 or 60 Hz.
In bipolar diodes that need to switch at much higher switching frequencies, lifetime control is used to reduce the effective lifetime of the charge carriers. A reduced charge carrier lifetime lowers the concentration of charge carriers that can be injected into the drift layer of the diode. That makes the diode conduct less well in the on-state (higher VF at the same current density), but it also makes it easier to extract the excess stored charge (simply because there is less charge) and it speeds up the spontaneous electron-hole recombination process: the diode switches faster. This also explains why faster diodes (with lower trr) have higher VF.
Lifetime control in bipolar devices is a matter of artificially introducing ‘energy levels’ between the valence band and the conduction band in the silicon lattice. This can be done in various ways, but two commonly used methods are so-called “Gold-kill” and “Platinum-kill”. Relatively low concentrations of gold (Au) or platinum (Pt) are diffused into the silicon lattice. The Au or Pt atoms introduce an energy level between the conduction and valence band that functions as a ‘stepping stone’ for excess electrons (in the conduction band) and holes (in the valence band) to recombine. The more of these so-called recombination centers are present in the silicon lattice, the lower the effective lifetime of charge carriers will be.
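In a simplified Shockley-Read-Hall picture, the recombination rate contributed by a concentration of added centers adds to the intrinsic rate, so lifetimes combine reciprocally. The capture coefficient and center concentration below are rough order-of-magnitude assumptions for illustration, not Au-kill or Pt-kill process data:

```python
# Rates add, lifetimes do not: 1/tau_eff = 1/tau_intrinsic + c * N_t,
# where c is an assumed capture coefficient and N_t the center concentration.
def effective_lifetime_s(tau_intrinsic_s, capture_coeff_cm3_s, n_t_cm3):
    rate = 1.0 / tau_intrinsic_s + capture_coeff_cm3_s * n_t_cm3
    return 1.0 / rate

tau0 = 10e-6                                         # 'normal' silicon: ~10 us
tau_killed = effective_lifetime_s(tau0, 1e-8, 1e15)  # with added centers
```

With these illustrative values the effective lifetime drops from about 10 µs to roughly 0.1 µs; trr shrinks accordingly, at the cost of the higher VF and leakage discussed above.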
Reduced charge carrier lifetime increases the VF of a diode because electrons and holes recombine not only when the diode is required to switch off but also while it conducts. A second consequence of introducing (Au or Pt) recombination centers is that these centers also operate as so-called generation centers when the diode is in the off-state. This causes a fast Au-killed or Pt-killed diode to have a (much) higher leakage current than a slow “un-killed” diode. At higher operating temperatures the leakage current can account for a substantial part of the total power loss in the diode and may ultimately cause thermal runaway.
However, there is a substantial difference between Au-killed and Pt-killed diodes. Gold is the most effective recombination center, but it is also the ‘best’ generation center: at relatively low temperatures the leakage current of an Au-killed diode can already be uncomfortably high. Platinum is not as effective a recombination center as gold, but it is also a less effective generation center, so Pt-killed diodes exhibit lower leakage current at the same operating temperature. Whereas most Au-killed diodes cannot be used at operating temperatures above 150°C, Pt-killed diodes are commonly able to operate at 175°C or higher and are therefore the more reliable solution.
https://www.numerade.com/questions/high-levels-of-ozone-leftmathrmo_3right-cause-rubber-to-deteriorate-green-plants-to-turn-brown-and-m/
Problem 83
High levels of ozone $\left(\mathrm{O}_{3}\right)$ cause rubber to deteriorate, green plants to turn brown, and many people to have difficulty breathing. (a) Is the formation of $\mathrm{O}_{3}$ from $\mathrm{O}_{2}$ favored at all $T,$ no $T,$ high $T$ or low $T ?$ (b) Calculate $\Delta G^{\circ}$ for this reaction at 298 $\mathrm{K}$ .
(c) Calculate $\Delta G$ at 298 $\mathrm{K}$ for this reaction in urban smog where
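For part (b), a minimal sketch using a typical tabulated value $\Delta G^\circ_f(\mathrm{O_3}) \approx +163.2$ kJ/mol (an assumed textbook value — check the appendix of your own text):

```python
# dG° for 3 O2(g) -> 2 O3(g) at 298 K from free energies of formation.
dGf_O3 = 163.2   # kJ/mol, assumed textbook value for ozone
dGf_O2 = 0.0     # kJ/mol, element in its standard state
dG_rxn = 2 * dGf_O3 - 3 * dGf_O2
print(dG_rxn)    # ~ +326.4 kJ per mole of reaction
```

The large positive $\Delta G^\circ$ (with $\Delta H^\circ > 0$ and $\Delta S^\circ < 0$ for 3 mol gas → 2 mol gas) is consistent with part (a): the formation of $\mathrm{O_3}$ from $\mathrm{O_2}$ is favored at no temperature.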
Chapter 20
Thermodynamics: Entropy, Free Energy, and the Direction of Chemical Reactions
CHEMISTRY: The Molecular Nature of Matter and Change 2016
https://nbviewer.ipython.org/github/BMClab/BMC/blob/master/notebooks/SVDalgorithm.ipynb

# Determining rigid body transformation using the SVD algorithm
Marcos Duarte
Laboratory of Biomechanics and Motor Control (http://demotu.org/)
Federal University of ABC, Brazil
Ideally, three non-collinear markers placed on a moving rigid body are everything we need to describe its movement (translation and rotation) in relation to a fixed coordinate system. However, in practical situations of human motion analysis, markers are placed on the soft tissue of a deformable body, and this generates artifacts caused by muscle contraction, skin deformation, marker wobbling, etc. In this situation, the use of only three markers can produce unreliable results. It has been shown that four or more markers on the segment, followed by a mathematical procedure to calculate the 'best' rigid-body transformation taking all these markers into account, produces more robust results (Söderkvist & Wedin 1993; Challis 1995; Cappozzo et al. 1997).
One mathematical procedure to calculate the transformation with three or more marker positions involves the use of the singular value decomposition (SVD) algorithm from linear algebra. The SVD algorithm decomposes a matrix $\mathbf{M}$ (which represents a general transformation between two coordinate systems) into three simple transformations: a rotation $\mathbf{V^T}$, a scaling factor $\mathbf{S}$ along the rotated axes and a second rotation $\mathbf{U}$:
$$\mathbf{M}= \mathbf{U\;S\;V^T}$$
And the rotation matrix is given by:
$$\mathbf{R}= \mathbf{U\:V^T}$$
The matrices $\mathbf{U}$ and $\mathbf{V}$ are both orthogonal (det = $\pm$1).
For example, if we have registered the position of four markers placed on a moving segment at 100 different instants, as well as the position of these same markers during what is known in biomechanics as a static calibration trial, we would use the SVD algorithm to calculate the 100 rotation matrices (between the static trial and the 100 instants) in order to find the Cardan angles for each instant.
The function svdt.py (its code is shown at the end of this text) determines the rotation matrix ($R$) and the translation vector ($L$) for a rigid body after the following transformation: $B = R*A + L + err$, where $A$ and $B$ represent the rigid body at different instants and $err$ is aleatory (random) noise. $A$ and $B$ are matrices with the marker coordinates at different instants (at least three non-collinear markers are necessary to determine the 3D transformation).
The matrix $A$ can be thought of as representing a local coordinate system (but $A$ is not a basis) and matrix $B$ the global coordinate system. The operation $P_g = R*P_l + L$ calculates the coordinates of the point $P_l$ (expressed in the local coordinate system) in the global coordinate system ($P_g$).
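A tiny numeric illustration of the mapping $P_g = R*P_l + L$ (the rotation and translation here are chosen by hand for illustration, not produced by svdt):

```python
import numpy as np

# 90-degree rotation about z plus a translation, picked for illustration
R = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])
L = np.array([1., 2., 0.])

Pl = np.array([1., 0., 0.])  # point expressed in the local coordinate system
Pg = R @ Pl + L              # the same point in the global coordinate system
print(Pg)                    # [1. 3. 0.]
```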
Let's test the svdt function:
In [2]:
# Import the necessary libraries
import numpy as np
import sys
sys.path.insert(1, r'./../functions')
In [3]:
from svdt import svdt
# markers in different columns (default):
A = np.array([0,0,0, 1,0,0, 0,1,0, 1,1,0]) # four markers
B = np.array([0,0,0, 0,1,0, -1,0,0, -1,1,0]) # four markers
R, L, RMSE = svdt(A, B)
print('Rotation matrix:\n', np.around(R, 4))
print('Translation vector:\n', np.around(L, 4))
print('RMSE:\n', np.around(RMSE, 4))
# markers in different rows:
A = np.array([[0,0,0], [1,0,0], [ 0,1,0], [ 1,1,0]]) # four markers
B = np.array([[0,0,0], [0,1,0], [-1,0,0], [-1,1,0]]) # four markers
R, L, RMSE = svdt(A, B, order='row')
print('Rotation matrix:\n', np.around(R, 4))
print('Translation vector:\n', np.around(L, 4))
print('RMSE:\n', np.around(RMSE, 4))
Rotation matrix:
[[ 0. -1. 0.]
[ 1. 0. 0.]
[ 0. 0. 1.]]
Translation vector:
[0. 0. 0.]
RMSE:
0.0
Rotation matrix:
[[ 0. -1. 0.]
[ 1. 0. 0.]
[ 0. 0. 1.]]
Translation vector:
[0. 0. 0.]
RMSE:
0.0
For the matrix of a pure rotation around the z axis, the element in the first row and second column is $-\sin\gamma$, which means the rotation was $90^\circ$, as expected.
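Recovering the angle itself from such a matrix can be sketched with `atan2` (an illustration, not part of svdt):

```python
import numpy as np

# For a rotation about z, R[1, 0] = sin(gamma) and R[0, 0] = cos(gamma)
R = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])
gamma = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
print(gamma)  # 90 degrees, as read off from the matrix above
```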
A typical use of the svdt function is to calculate the transformation between $A$ and $B$ ($B = R*A + L$), where $A$ is the matrix with the markers data in one instant (the calibration or static trial) and $B$ is the matrix with the markers data of more than one instant (the dynamic trial).
Input $A$ as a 1D array [x1, y1, z1, ..., xn, yn, zn], where n is the number of markers, and $B$ as a 2D array with one instant per row (each row structured like $A$).
The output $R$ has the shape (tn, 3, 3), where tn is the number of instants, $L$ the shape (tn, 3), and $RMSE$ the shape (tn,). If tn is equal to one, the singleton dimension is dropped and the outputs have the same shapes as in the single-instant case.
Let's show this case:
In [4]:
A = np.array([1,0,0, 0,1,0, 0,0,1])
B = np.array([0,1,0, -1,0,0, 0,0,1])
B = np.vstack((B, B)) # simulate two instants (two rows)
R, L, RMSE = svdt(A, B)
print('Rotation matrix:\n', np.around(R, 4))
print('Translation vector:\n', np.around(L, 4))
print('RMSE:\n', np.around(RMSE, 4))
Rotation matrix:
[[[-0. -1. -0.]
[ 1. -0. -0.]
[-0. -0. 1.]]
[[-0. -1. -0.]
[ 1. -0. -0.]
[-0. -0. 1.]]]
Translation vector:
[[0. 0. 0.]
[0. 0. 0.]]
RMSE:
[0. 0.]
## References

- Söderkvist I, Wedin PÅ (1993) Determining the movements of the skeleton using well-configured markers. Journal of Biomechanics, 26, 1473-1477.
- Challis JH (1995) A procedure for determining rigid body transformation parameters. Journal of Biomechanics, 28, 733-737.
- Cappozzo A, Cappello A, Della Croce U, Pensalfini F (1997) Surface-marker cluster design criteria for 3-D bone movement reconstruction. IEEE Transactions on Biomedical Engineering, 44, 1165-1174.

### Function svdt.py
In [ ]:
# %load ./../functions/svdt.py
#!/usr/bin/env python
"""Calculates the transformation between two coordinate systems using SVD."""
__author__ = "Marcos Duarte, https://github.com/demotu/BMC"
__version__ = "1.0.1"
import numpy as np
def svdt(A, B, order='col'):
"""Calculates the transformation between two coordinate systems using SVD.
This function determines the rotation matrix (R) and the translation vector
(L) for a rigid body after the following transformation [1]_, [2]_:
B = R*A + L + err.
Where A and B represents the rigid body in different instants and err is an
aleatory noise (which should be zero for a perfect rigid body). A and B are
matrices with the marker coordinates at different instants (at least three
non-collinear markers are necessary to determine the 3D transformation).
The matrix A can be thought to represent a local coordinate system (but A
it's not a basis) and matrix B the global coordinate system. The operation
Pg = R*Pl + L calculates the coordinates of the point Pl (expressed in the
local coordinate system) in the global coordinate system (Pg).
A typical use of the svdt function is to calculate the transformation
between A and B (B = R*A + L), where A is the matrix with the markers data
in one instant (the calibration or static trial) and B is the matrix with
the markers data for one or more instants (the dynamic trial).
If the parameter order='row', the A and B parameters should have the shape
(n, 3), i.e., n rows and 3 columns, where n is the number of markers.
If order='col', A can be a 1D array with the shape (n*3,), like
[x1, y1, z1, ..., xn, yn, zn], and B a 1D array with the same structure as A
or a 2D array with the shape (ni, n*3), where ni is the number of instants.
The output R has the shape (ni, 3, 3), L has the shape (ni, 3), and RMSE
has the shape (ni,). If ni is equal to one, the outputs will have the
singleton dimension dropped.
Part of this code is based on the programs written by Alberto Leardini,
Christoph Reinschmidt, and Ton van den Bogert.
Parameters
----------
A : Numpy array
Coordinates [x,y,z] of at least three markers with two possible shapes:
order='row': 2D array (n, 3), where n is the number of markers.
order='col': 1D array (3*nmarkers,) like [x1, y1, z1, ..., xn, yn, zn].
B : 2D Numpy array
Coordinates [x,y,z] of at least three markers with two possible shapes:
order='row': 2D array (n, 3), where n is the number of markers.
order='col': 2D array (ni, n*3), where ni is the number of instants.
If ni=1, B is a 1D array like A.
order : string
'col': specifies that A and B are column oriented (default).
'row': specifies that A and B are row oriented.
Returns
-------
R : Numpy array
Rotation matrix between A and B with two possible shapes:
order='row': (3, 3).
order='col': (ni, 3, 3), where ni is the number of instants.
If ni=1, R will have the singleton dimension dropped.
L : Numpy array
Translation vector between A and B with two possible shapes:
order='row': (3,).
order='col': (ni, 3), where ni is the number of instants.
If ni=1, L will have the singleton dimension dropped.
RMSE : array
Root-mean-squared error for the rigid body model: B = R*A + L + err
with two possible shapes:
order='row': (1,).
order='col': (ni,), where ni is the number of instants.
See Also
--------
numpy.linalg.svd
Notes
-----
The singular value decomposition (SVD) algorithm decomposes a matrix M
(which represents a general transformation between two coordinate systems)
into three simple transformations [3]_: a rotation Vt, a scaling factor S
along the rotated axes and a second rotation U: M = U*S*Vt.
The rotation matrix is given by: R = U*Vt.
References
----------
.. [1] Söderkvist, Wedin (1993) Journal of Biomechanics, 26, 1473-1477.
.. [2] http://www.kwon3d.com/theory/jkinem/rotmat.html.
.. [3] http://en.wikipedia.org/wiki/Singular_value_decomposition.
Examples
--------
>>> import numpy as np
>>> from svdt import svdt
>>> A = np.array([0,0,0, 1,0,0, 0,1,0, 1,1,0]) # four markers
>>> B = np.array([0,0,0, 0,1,0, -1,0,0, -1,1,0]) # four markers
>>> R, L, RMSE = svdt(A, B)
>>> B = np.vstack((B, B)) # simulate two instants (two rows)
>>> R, L, RMSE = svdt(A, B)
>>> A = np.array([[0,0,0], [1,0,0], [ 0,1,0], [ 1,1,0]]) # four markers
>>> B = np.array([[0,0,0], [0,1,0], [-1,0,0], [-1,1,0]]) # four markers
>>> R, L, RMSE = svdt(A, B, order='row')
"""
A, B = np.asarray(A), np.asarray(B)
if order == 'row' or B.ndim == 1:
if B.ndim == 1:
A = A.reshape(int(A.size/3), 3)
B = B.reshape(int(B.size/3), 3)
R, L, RMSE = svd(A, B)
else:
A = A.reshape(int(A.size/3), 3)
ni = B.shape[0]
R = np.empty((ni, 3, 3))
L = np.empty((ni, 3))
RMSE = np.empty(ni)
for i in range(ni):
R[i, :, :], L[i, :], RMSE[i] = svd(A, B[i, :].reshape(A.shape))
return R, L, RMSE
def svd(A, B):
"""Calculates the transformation between two coordinate systems using SVD.
See the help of the svdt function.
Parameters
----------
A : 2D Numpy array (n, 3), where n is the number of markers.
Coordinates [x,y,z] of at least three markers
B : 2D Numpy array (n, 3), where n is the number of markers.
Coordinates [x,y,z] of at least three markers
Returns
-------
R : 2D Numpy array (3, 3)
Rotation matrix between A and B
L : 1D Numpy array (3,)
Translation vector between A and B
RMSE : float
Root-mean-squared error for the rigid body model: B = R*A + L + err.
See Also
--------
numpy.linalg.svd
"""
Am = np.mean(A, axis=0) # centroid of m1
Bm = np.mean(B, axis=0) # centroid of m2
M = np.dot((B - Bm).T, (A - Am)) # considering only rotation
# singular value decomposition
U, S, Vt = np.linalg.svd(M)
# rotation matrix
R = np.dot(U, np.dot(np.diag([1, 1, np.linalg.det(np.dot(U, Vt))]), Vt))
# translation vector
L = B.mean(0) - np.dot(R, A.mean(0))
# RMSE
err = 0
for i in range(A.shape[0]):
Bp = np.dot(R, A[i, :]) + L
err += np.sum((Bp - B[i, :])**2)
RMSE = np.sqrt(err/A.shape[0]/3)
return R, L, RMSE
http://mathhelpforum.com/algebra/93588-modulus-inequalities.html

1. ## Modulus and inequalities
Solve this,
(|x|+1)/(|x|-1)<4
Is there anything wrong with this step,
|x|+1 < 4|x|-4 ??
Then, -3|x|<-5
|x|>5/3
x>5/3, x<-5/3
OR
when x<0
(-x+1)/(-x-1)<4
x<-5/3
when x>=0
x>5/3
But there is another solution of -1<x<1. How to get this out??
Thanks.
2. Question
$\frac{|x|+1}{|x|-1}<4$
Instead of multiplying by $|x|-1$, multiply by $(|x|-1)^2$ and you should obtain that missing solution when solving the equation.
This works because $|x|-1$ can be negative. Multiplying both sides by $|x|-1$ silently assumes it is positive, which discards the solutions with $|x|<1$; multiplying by the non-negative quantity $(|x|-1)^2$ preserves the direction of the inequality and avoids the case split.
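A quick numerical sanity check of the full solution set $|x|<1$ or $|x|>5/3$ (a sketch, not a proof):

```python
def holds(x):
    # left-hand side of the inequality; undefined at |x| = 1
    return (abs(x) + 1) / (abs(x) - 1) < 4

inside  = [holds(x) for x in (-0.9, -0.5, 0.0, 0.5, 0.9)]   # |x| < 1
outside = [holds(x) for x in (-10, -2, 2, 10)]              # |x| > 5/3
between = [holds(x) for x in (-1.5, -1.2, 1.2, 1.5)]        # 1 < |x| < 5/3
print(all(inside), all(outside), any(between))  # True True False
```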
https://puzzling.stackexchange.com/questions/91897/worst-way-to-solve-rubiks-in-one-algorithm/91945

# Worst way to solve Rubik's in one algorithm
If you apply the same algorithm over and over again the cube will be solved.
I want to know what algorithm is the worst one (the one with the longest cycle)
I have found RU (cycle of 60 times)
I have found one of 72 but I can't remember it
• Cycling RU solves the cube in 60 times? I didn’t know that was possible – Gabe Dec 12 '19 at 16:18
It seems like you are trying to find
An element of the Rubik's cube group whose order is as large as possible.
One such example that satisfies this is
$$(RU^2D^{-1}BD^{-1})$$ which has the maximum attainable order of $$1260$$
In particular,
Starting from a solved cube, you need to apply this algorithm $$1260$$ times to get back to the beginning.
As mentioned by armb in the comments, there is a good answer here discussing the maximum orders for an $$n \times n \times n$$ Rubiks cube.
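For readers who want to check such orders themselves: the order of a move sequence equals the least common multiple of the cycle lengths of the permutation it induces. A generic sketch (the permutation below is a made-up example, not a cube move):

```python
from math import lcm

def order(perm):
    """Order of a permutation given as a list mapping i -> perm[i]."""
    seen, result = set(), 1
    for start in range(len(perm)):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:
            seen.add(j)
            j = perm[j]
            length += 1
        result = lcm(result, length)
    return result

# a 3-cycle together with a 2-cycle has order lcm(3, 2) = 6
print(order([1, 2, 0, 4, 3]))  # 6
```

Applied to the full sticker permutation of a move sequence, this is how orders such as 1260 are verified.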
• Can you prove that? – pgp1 Dec 11 '19 at 19:46
• @pgp1 you can do it yourself - simply apply the algorithm 1260 times! :P – Quintec Dec 12 '19 at 1:14
• "apply the algorithm 1260 times" proves it has order 1260, it doesn't prove that that is the maximum possible order. For the latter, see math.stackexchange.com/questions/2392906/… – armb Dec 12 '19 at 11:06
• @armb This is very useful, I may add this link to the answer. – hexomino Dec 12 '19 at 11:08
There are a ton of ways to get the cube from solved state to solved state. One way is (R B D)x ~60 which I find quite interesting. Since the cube is very symmetric, there are a lot of these cases.
https://brilliant.org/problems/more-than-enough-info/

# More than enough info!
Geometry Level 2
Area of a triangle with side lengths $a,b,c$ is $\dfrac{9}{4} \sqrt{3}$.
Also $a^2 + b^2 + c^2 = 27$.
Find the value of $\dfrac{abc}{a+b+c}$.
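One configuration consistent with both conditions is the equilateral triangle with $a=b=c=3$; here is a quick numeric check of that case (a sanity check, not a uniqueness proof):

```python
import math

a = b = c = 3.0
s = (a + b + c) / 2                                # semi-perimeter
area = math.sqrt(s * (s - a) * (s - b) * (s - c))  # Heron's formula
print(abs(area - 9 * math.sqrt(3) / 4) < 1e-12)    # True: area matches
print(a**2 + b**2 + c**2)                          # 27.0
print(a * b * c / (a + b + c))                     # 3.0
```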
https://planetmath.org/HerbrandsTheorem

# Herbrand’s theorem
Let $\mathbb{Q}(\zeta_{p})$ be a cyclotomic extension of $\mathbb{Q}$, with $p$ an odd prime, let $A$ be the Sylow $p$-subgroup of the ideal class group of $\mathbb{Q}(\zeta_{p})$, and let $G$ be the Galois group of this extension. Note that the character group of $G$, denoted $\hat{G}$, is given by
$\displaystyle\hat{G}=\{\chi^{i}\mid 0\leq i\leq p-2\}$
For each $\chi\in\hat{G}$, let $\varepsilon_{\chi}$ denote the corresponding orthogonal idempotent of the group ring, and note that the $p$-Sylow subgroup of the ideal class group is a $\mathbb{Z}[G]$-module under the typical multiplication. Thus, using the orthogonal idempotents, we can decompose the module $A$ via $A=\sum_{i=0}^{p-2}A_{\omega^{i}}\equiv\sum_{i=0}^{p-2}A_{i}$.
Last, let $B_{k}$ denote the $k$th Bernoulli number.
###### Theorem 1 (Herbrand).
Let $i$ be odd with $3\leq i\leq p-2$. Then $A_{i}\neq 0\iff p\mid B_{p-i}$.
Only the first direction of this theorem ($\implies$) was proved by Herbrand himself. The converse is much more intricate, and was proved by Ken Ribet.
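A classical instance: for $p = 691$ the numerator of $B_{12} = -691/2730$ is divisible by $p$, so with $i = p - 12 = 679$ (odd) Ribet's converse gives $A_{679} \neq 0$. The divisibility is easy to verify; the sketch below uses the Akiyama–Tanigawa algorithm (with this scheme the even-index values agree with the usual convention):

```python
from fractions import Fraction

def bernoulli(n):
    """n-th Bernoulli number via the Akiyama-Tanigawa algorithm."""
    a = [Fraction(0)] * (n + 1)
    for m in range(n + 1):
        a[m] = Fraction(1, m + 1)
        for j in range(m, 0, -1):
            a[j - 1] = j * (a[j - 1] - a[j])
    return a[0]

b12 = bernoulli(12)
print(b12)                       # -691/2730
print(b12.numerator % 691 == 0)  # True
```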
Title: Herbrand’s theorem (HerbrandsTheorem) · Author: mathcam (2727) · Last modified: 2013-03-22 14:12:45 · Type: Theorem · MSC: 11R29
https://www.science20.com/quantum_diaries_survivor?page=209

How To Choose Your Restaurant Or Hotel
So you're planning ahead for your next trip to a remote location, and you try to make sense of...
The ATLAS Quest For Photon Jets
What is a photon jet? Despite their exotic name, photon jets are a well studied thing nowadays...
ScienceGround At Festivaletteratura
Since yesterday, and for almost a week, the literature festival in Mantova hosts "ScienceGround"...
Something Does Not Match. Error Or Discovery?
On September 4 to 8 the city of Mantova, in northern Italy, will be brimming with writers and readers...
Tommaso Dorigo is an experimental particle physicist, who works for the INFN at the University of Padova, and collaborates with the CMS experiment at the CERN LHC. He coordinates the European network...
# Citizen Randall
May 27 2009
This afternoon Lisa Randall, one of the most famous theoretical physicists of our time, received from the hands of Flavio Zanonato, mayor of Padova, the keys of the city.
# Live Now: Supernova Hunt With the Virtual Telescope!
May 25 2009
http://www.coelumstream.com/ is broadcasting live images of galaxies, to be compared with reference images in search of supernovae. A commentary is provided in Italian and English. Join NOW!
Below is a screenshot of what is being shown now.
# Planck 2009
May 25 2009
This morning the Planck 09 conference started at the Auditorium Altinate (see picture, right) in Padova. For a week, theorists and experimentalists will discuss hot topics in a variety of fields, from particle physics to cosmology, to string theory. A PDF file with the program is online.
# Tevatron Higgs Limits Strengthened By A New Theoretical Study
May 24 2009
A new paper in the Arxiv attracted my attention this morning. It is titled "Perturbative QCD effects and the search for a $H \to WW \to l \nu l \nu$ signal at the Tevatron", and is authored by a set of quite distinguished theorists: C.Anastasiou, G.Dissertori, M.Grazzini, F.Stockli, and B.Webber.
# CDF Vs. DZERO: And The Winner Is...
May 21 2009
Last Tuesday CDF announced their own discovery of the Omega_b baryon, a measurement which creates a controversy with the competing experiment at the Tevatron collider, DZERO. That is because DZERO had already claimed discovery for that particle, almost one year ago, and because the two measurements disagree wildly with each other. Just browse through my past few posts in this column and you will find all the information you need (how lazy can one be with links?).
# The Say of the Week
May 21 2009
"I seldom enjoy competition in real life. I find it fun only when I am up against somebody who takes things more seriously than I do."
https://chemistry.stackexchange.com/questions/33297/why-do-3d-orbitals-have-lesser-energy-than-4s-orbitals-in-transition-metals/33310

# Why do 3d orbitals have lesser energy than 4s orbitals in transition metals? [duplicate]
This is quoted from Jim Clark's Chemguide
For reasons which are too complicated to go into at this level, once you get to scandium, the energy of the 3d orbitals becomes slightly less than that of the 4s, and that remains true across the rest of the transition series.
What is the reason stated above? Why does in only transition series, 4s has more energy than 3d & not in the preceeding elements?
## marked as duplicate by jerepierre, Ben Norris, ron, Klaus-Dieter Warzecha, Loong♦ Jun 25 '15 at 9:02
• There is evidence that the title statement is not true to begin with. See e.g. the links in the disclaimer in my (accepted) answer. – orthocresol Jan 6 at 16:50
# Disclaimer: I now believe this answer to be fully incorrect.
### Please consider un-upvoting it and/or downvoting it. I do not like seeing incorrect answers at +22.
However, I will leave it up for now. It is a reflection of what is taught in many undergraduate-level textbooks or courses. However, there have been criticisms of this particular graph in Shriver & Atkins, as well as of the idea that the 3d orbitals are somehow higher in energy than the 4s orbitals. I believe it was mentioned that the energies were calculated with the outdated Thomas–Fermi–Dirac model, but cannot really remember. I will ask another question about the 3d vs 4s issue, but in the meantime I would point the reader in the direction of these articles:
1. Pilar, F. L. 4s is always above 3d! Or, how to tell the orbitals from the wavefunctions. J. Chem. Educ. 1978, 55 (1), 2 DOI: 10.1021/ed055p2.
2. Melrose, M. P.; Scerri, E. R. Why the 4s Orbital Is Occupied before the 3d. J. Chem. Educ. 1996, 73 (6), 498 DOI: 10.1021/ed073p498.
3. Vanquickenborne, L. G.; Pierloot, K.; Devoghel, D. Transition Metals and the Aufbau Principle. J. Chem. Educ. 1994, 71 (6), 469 DOI: 10.1021/ed071p469.
4. Scerri, E. R. Transition metal configurations and limitations of the orbital approximation. J. Chem. Educ. 1989, 66 (6), 481 DOI: 10.1021/ed066p481.
5. Some criticism of Atkins' books by Eric Scerri.
While Molly's answer does a good job of explaining why electrons preferentially occupy the 4s subshell over the 3d subshell (due to less inter-electron repulsion), it doesn't directly answer the question of why the order of the 3d/4s energies changes going from Ca to Sc. I stole this figure from Shriver & Atkins 5th ed:
The red line represents the energy of the 3d orbital, and the blue line the energy of the 4s orbital. You can see that up to Ca, 3d > 4s but for Sc onwards, 4s < 3d.
As chemguide rightly points out, up to Ca, the 4s orbital is lower in energy than the 3d. The energy of an electron in an orbital is given by $$E = -hcR\left(\frac{Z_\text{eff}}{n}\right)^2$$ where $$hcR$$ is a collection of constants, $$Z_\text{eff}$$ is the effective nuclear charge experienced by the electron, and $$n$$ is the principal quantum number. Since $$n = 4$$ for the 4s orbital and $$n = 3$$ for the 3d orbital, one would initially expect the 3d orbital to be lower in energy (a more negative energy). However, the 4s orbital is more penetrating than the 3d orbital; this can be seen by comparing the radial distribution functions of the two orbitals, defined as $$R(r)^2 r^2$$ where $$R(r)$$ is the radial wavefunction obtained from the Schrodinger equation:
The 4s orbital has a small inner radial lobe (the blue bump at the left-hand side of the graph), which means that a 4s electron "tends to spend time" near the nucleus, causing it to experience the full nuclear charge to a greater extent. We say that the 4s electron penetrates the core electrons (i.e. 1s through 3p subshells) better. It is therefore shielded less than a 3d electron, which makes $$Z_\text{eff}$$ larger. Going from the 3d to the 4s orbital, the increase in $$Z_\text{eff}$$ wins ever so slightly over the increase in $$n$$, which makes the energy of the 4s orbital lower.
Now, going from Ca to Sc means that you are adding one more proton to the nucleus. This makes the nuclear charge larger and therefore both the 4s and 3d orbitals are stabilised (their energies decrease). The catch is that the energy of the 4s orbital decreases more slowly than that of the 3d orbital, because the 4s orbital is relatively radially diffuse (the maximum in the radial distribution function occurs at a larger value of $$r$$). If you have studied physics, you could think of it as the interaction between two point charges; if the distance between them is large, then increasing the magnitude of one point charge has a smaller effect on the potential energy $$U = -\frac{kq_1q_2}{r}$$. The faster decrease of the 3d energy also makes sense because if nuclear charge were to tend to infinity, shielding would become negligible; the orbital energies would then be entirely determined by $$n$$, and if this were to be the case, you would expect 3d < 4s in terms of energies, as we said at the very start.
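The balance between Z_eff and n described above can be made concrete with a small numeric sketch of the E = -hcR(Z_eff/n)^2 comparison. The Z_eff values below are purely illustrative (not measured values); they are chosen only to show how a faster-growing Z_eff(3d) can flip the 4s/3d ordering:

```python
# Hedged illustration of E = -hcR * (Z_eff / n)^2 from the answer above.
# The Z_eff values are made up for illustration only; they are chosen so
# that the 4s/3d ordering flips between a Ca-like and an Sc-like case.
HCR = 13.6  # eV; hcR is the Rydberg energy

def orbital_energy(z_eff, n):
    """One-electron energy in eV for effective nuclear charge z_eff."""
    return -HCR * (z_eff / n) ** 2

# Ca-like case: 4s penetrates well, so its Z_eff gain beats its larger n.
e_4s = orbital_energy(3.0, 4)   # hypothetical Z_eff for 4s
e_3d = orbital_energy(2.0, 3)   # hypothetical Z_eff for 3d
print(e_4s < e_3d)  # True: 4s lower (more negative), as before Sc

# Sc-like case: the extra proton raises Z_eff(3d) faster than Z_eff(4s).
e_4s = orbital_energy(3.2, 4)
e_3d = orbital_energy(3.0, 3)
print(e_3d < e_4s)  # True: 3d now lower, as from Sc onwards
```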
However, in Sc, the electrons preferentially occupy the 4s subshell even though it is higher in energy, and this is also because the 4s orbital is radially diffuse - the electrons have more "personal space" and experience less repulsion. One way of putting it is that an empty 4s orbital in Sc has a higher energy than an empty 3d orbital, but a filled 4s orbital has a lower energy than a filled 3d orbital. The fact that 4s > 3d in energy also explains why, for the transition metals, the 4s electrons are removed first upon ionisation ($$\ce{Sc^+}: [\ce{Ar}](3\mathrm{d})^1(4\mathrm{s})^1$$.)
I just want to end off with a comment that the factors that determine the electronic configurations of d-block and f-block elements are actually very closely balanced and just a small change in one factor can lead to a completely different electronic configuration. This is why Cr and Cu have an "anomalous" configuration that maximises exchange energy, whereas we don't get carbon adopting a $$(1\mathrm{s})^2(2\mathrm{s})^1(2\mathrm{p})^3$$ configuration in order to have "stable half-filled shells".
• In the quest of re-reading old posts, I got somewhat baffled at one point here: you said up to $\ce{Ca},$ the $E$ for an electron in $\rm{4s}$ is lower than that of $\rm{3d}$ as the increase of $\rm{Z_{eff}}$ gets somewhat nullified by the increase in $n$ in the denominator. Okay. But I'm not getting why the same thing doesn't happen in elements after $\ce{Ca}$ eg. $\ce{Sc};$ $\rm{ 4s}$ is radially diffused even in $\ce{Ca}$ but does this affect the energy of the electron? Sorry, if I'm bothering you @Ortho, but would appreciate if you tell me why can't $\rm 4s\lt 3d$ in $\ce{Sc}$ [contd.] – user5764 Dec 3 '16 at 13:41
• for the same reason as in $\ce{Ca}$ viz. the increase in $n$ in the denominator is nullified by the increase in $\rm{Z_{eff}}.$ Also, there maybe a possible typo here: 3d > 4s but for Sc onwards, 4s < 3d. Thanks. – user5764 Dec 3 '16 at 13:41
• @MAFIA36790 Sorry I didn't get back to you earlier, I was travelling on that day and forgot all about it. To be honest, after a couple more years of chemistry, I am not entirely convinced how accurate a description this is. There have been criticisms of this particular graph in Shriver & Atkins, which I read before (various authors have written on it before), but I don't have the time to do thorough research into the matter right now. I will point you in the direction of these: pubs.acs.org/doi/abs/10.1021/ed055p2 and chem.ucla.edu/dept/Faculty/scerri/pdf/Atkins_critique.pdf – orthocresol Dec 8 '16 at 9:45
• The idea in my post was that as atomic number increases, $Z_\mathrm{eff}$ of both the 3d and 4s orbitals increase. However, the 3d orbital is more greatly affected, i.e. $Z_\mathrm{eff}(\mathrm{3d})$ increases faster than $Z_\mathrm{eff}(\mathrm{4s})$. Consequently, there will be a crossover point where $$\frac{Z_\mathrm{eff}(\mathrm{3d})}{3} = \frac{Z_\mathrm{eff}(\mathrm{4s})}{4},$$ i.e. 3d and 4s have equal energies. Before this point, 4s < 3d, and after this point, 3d < 4s. Atkins' argument is that this point lies exactly between Ca and Sc. – orthocresol Dec 8 '16 at 9:53
• Why do we look at the presence of radial nodes near the nucleus when measuring the amount of penetration of the electrons in a particular orbital. Shouldn't we look at the mean distance or most probable distance of the electron from the nucleus? – Tan Yong Boon Apr 11 '18 at 23:07
This is a difficult question to answer. Following the Aufbau Principle and the n+l rule, the 4s orbital should fill before the 3d orbital. So why is 3d lower in energy? In short, the Aufbau Principle is not entirely correct. It is a guideline (like many things in chemistry).
So, orbitals fill in order of stability. That is to say, electrons will go where they will be the most stable. It takes energy to hold electrons around the nucleus. The farther away they are, the more energy is needed to keep them. So the higher the principal quantum number, the higher the energy, i.e. 3s is higher in energy than 2s. At the same time, the principal quantum number is not the only number that needs to be considered. The quantum number l, for example, is also important. The higher the value of l, the higher the energy. So 3d is higher in energy than 3p, which is higher in energy than 3s. The 3d orbitals are more compactly placed around the nucleus than the 4s orbitals, so they fill first, even though this contradicts the Aufbau principle. This can be seen experimentally with the electron configurations for scandium: Sc3+: [Ar]; Sc2+: [Ar]3d(1); Sc+: [Ar]3d(1)4s(1); Sc: [Ar]3d(1)4s(2).
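The n+l rule mentioned in this answer can be sketched as a simple sort over (n + l, n). This is only the idealised Madelung ordering; as the answer notes, real configurations (such as the scandium ions listed above, or Cr and Cu) deviate from it:

```python
# Minimal sketch of the n+l (Madelung / Aufbau) ordering rule: orbitals
# fill in order of increasing n+l, with ties broken by smaller n.
L_LETTERS = "spdf"

# All (n, l) subshells up to n = 7, l <= 3 (s, p, d, f).
orbitals = [(n, l) for n in range(1, 8) for l in range(min(n, 4))]
orbitals.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))

order = [f"{n}{L_LETTERS[l]}" for n, l in orbitals]
print(" ".join(order[:11]))
# -> 1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p
```

Note that the rule predicts 4s before 3d, which is exactly the point under discussion in this thread.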
Now, it is important to note that the 4s level does fill before 3d is entirely full. This is due to the 3d orbital's compactness. Electron repulsion "pushes" electrons into higher energy levels with less repulsion.
I would recommend reading this as it explains this in much more detail: http://www.rsc.org/eic/2013/11/aufbau-electron-configuration
I hope that helped!
https://forum.bestpractical.com/t/list-of-emails-that-get-sent-correspondence/2476 | # List of emails that get sent correspondence
When an RT user replies/comments on a ticket, after she/he submits the email and it is sent, RT tells the person ‘Correspondence sent’. I would like to change this message to 'Correspondence sent to '. I’ve been poking around the RT code a bit and am having trouble finding where I can get a listing of which emails were actually sent the message instead of just the Cc’s, Bcc’s, and To addresses. Does anyone have any idea where I can get a list of the emails that were sent the message?
Thanks,
Tammy Dugan
This will be much easier in RT 3.2
PGP.sig (186 Bytes)
http://www.gabormelli.com/RKB/Kernel-based_Learning_Algorithm | # Kernel-based Learning Algorithm
A Kernel-based Learning Algorithm is a supervised learning algorithm that uses a kernel function: a function that implicitly maps instances into a high-dimensional space while keeping the similarity computation in the original space cheap, typically through inner-product operations.
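The "similarity score in the original space has low computational complexity" part of the definition is the kernel trick. A minimal illustrative sketch (the particular quadratic kernel and names are my own choices, not from the page): the kernel k(x, y) = (x . y)^2 evaluated in 2-D equals an inner product in an explicit 3-D feature space.

```python
# Sketch of the kernel trick: a polynomial kernel k(x, y) = (x . y)^2
# equals an inner product in a higher-dimensional feature space, but
# costs only an inner product in the original space. 2-D inputs only.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def kernel(x, y):
    """Quadratic kernel evaluated cheaply in the original 2-D space."""
    return dot(x, y) ** 2

def feature_map(x):
    """Explicit map whose 3-D inner product matches the kernel:
    (x1, x2) -> (x1^2, x2^2, sqrt(2) * x1 * x2)."""
    x1, x2 = x
    return (x1 * x1, x2 * x2, math.sqrt(2) * x1 * x2)

x, y = (1.0, 2.0), (3.0, 0.5)
print(kernel(x, y))                         # computed in 2-D
print(dot(feature_map(x), feature_map(y)))  # same value, computed in 3-D
```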
https://socratic.org/questions/how-do-you-evaluate-arcsin-1-sqrt-2 | # How do you evaluate arcsin(1/sqrt(2))?
$\arcsin \left(\frac{1}{\sqrt{2}}\right) = \frac{\pi}{4} \left(= {45}^{\circ}\right)$
Assuming $\arcsin \left(x\right)$ is restricted to the principal range $\left[-\frac{\pi}{2} , \frac{\pi}{2}\right]$
the above diagram shows the only configuration for $\arcsin \left(\frac{1}{\sqrt{2}}\right)$
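A quick numeric check of the result with Python's math module, just to confirm the value:

```python
# Numeric check that arcsin(1/sqrt(2)) = pi/4, i.e. 45 degrees.
import math

angle = math.asin(1 / math.sqrt(2))
print(math.isclose(angle, math.pi / 4))   # True
print(round(math.degrees(angle)))         # 45
```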
http://www.zentralblatt-math.org/ioport/en/?q=an%3A50254186 | # Geometric aspects of frame representations of abelian groups
Geometric aspects of frame representations of abelian groups. (English)
Trans. Am. Math. Soc. 356, No. 12, 4767-4786 (2004).
A frame for a separable Hilbert space $(H, \langle \cdot, \cdot \rangle)$ is a countable sequence $X = \{ x_j : j \in J\}$ such that there exist positive constants $C_1, C_2$ such that $C_1 \Vert x\Vert ^2 \leq \sum_{j \in J} \vert \langle x, x_j \rangle\vert ^2 \leq C_2 \Vert x\Vert ^2$ for all $x\in H$. The analysis operator $\Theta$ for $X$ is given by $\Theta: H \to l^2(J)$, $x\mapsto \{ \langle x, x_j \rangle: j \in J \}$. Let $G$ be a countable abelian group and $\pi: G \to B(H)$ a unitary representation of $G$ on a Hilbert space $H$. Such a representation is called a frame representation if there is a frame vector $v\in H$ such that $\{ \pi(g) v : g \in G\}$ is a frame for $H$. To each frame representation a multiplicity function $m : \widehat{G} \to \{ 0, 1, \ldots, \infty \}$ is associated, where $\widehat{G}$ denotes the character group of $G$. Let $\pi_H$ and $\pi_K$ be two frame representations of $G$ on $H$ and $K$ with analysis operators $\Theta_H$ and $\Theta_K$ for the corresponding frame vectors, respectively. Then a characterisation for the ranges of $\Theta_H$ and $\Theta_K$ to be equal, to be orthogonal, to have non-trivial intersection, or for one being contained in the other is obtained, where the characterisation is in terms of the supports of the corresponding multiplicity functions. Extensions to Bessel sequences arising from the action of the group are also considered. The results are then applied to the sampling of bandlimited functions and to wavelet and Weyl-Heisenberg frames, giving sufficient conditions for the above mentioned properties of the ranges of the analysis operators. The multiplicity functions are obtained explicitly in these cases.
Alexander Lindner (München)
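The frame condition in the review above has a simple finite-dimensional analogue. A small numeric sketch (pure Python, illustrative only, not from the reviewed paper): the three "Mercedes-Benz" unit vectors in R^2 form a tight frame, with both frame bounds equal to 3/2.

```python
# Numeric sketch of the frame condition C1 ||x||^2 <= sum_j |<x, x_j>|^2
# <= C2 ||x||^2 for a finite frame in R^2: three unit vectors spaced 120
# degrees apart give a tight frame with C1 = C2 = 3/2.
import math

angles = [math.pi / 2 + k * 2 * math.pi / 3 for k in range(3)]
frame = [(math.cos(t), math.sin(t)) for t in angles]

def frame_sum(x):
    """sum_j |<x, x_j>|^2 for the three-vector frame above."""
    return sum((x[0] * fx + x[1] * fy) ** 2 for fx, fy in frame)

for x in [(1.0, 0.0), (0.3, -2.0), (5.0, 5.0)]:
    norm_sq = x[0] ** 2 + x[1] ** 2
    print(math.isclose(frame_sum(x), 1.5 * norm_sq))  # True for every x
```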
https://cforall.uwaterloo.ca/trac/changeset/4260566f044d9ca2ac7165b79c878af338be545e/ | # Changeset 4260566
Ignore:
Timestamp:
Mar 24, 2021, 6:31:02 PM (19 months ago)
Branches:
arm-eh, enum, forall-pointer-decay, jacob/cs343-translation, master, new-ast-unique-expr, pthread-emulation, qualifiedEnum
Children:
8d4c9f4
Parents:
4150779
Message:
Andrew MMath: Clean-up of features.tex. Put more general EHM information in the opening and started updating the virtual section.
File:
1 edited
### Legend:
Unmodified
r4150779 This chapter covers the design and user interface of the \CFA exception-handling mechanism. \section{Virtuals} Virtual types and casts are not part of the exception system nor are they required for an exception system. But an object-oriented style hierarchy is a great way of organizing exceptions so a minimal virtual system has been added to \CFA. The pattern of a simple hierarchy was borrowed from object-oriented programming was chosen for several reasons. The first is that it allows new exceptions to be added in user code and in libraries independently of each other. Another is it allows for different levels of exception grouping (all exceptions, all IO exceptions or a particular IO exception). Also it also provides a simple way of passing data back and forth across the throw. Virtual types and casts are not required for a basic exception-system but are useful for advanced exception features. However, \CFA is not object-oriented so there is no obvious concept of virtuals. Hence, to create advanced exception features for this work, I needed to design and implement a virtual-like system for \CFA. % NOTE: Maybe we should but less of the rational here. Object-oriented languages often organized exceptions into a simple hierarchy, \eg Java. exception-handling mechanism (EHM). % or exception system. % We should cover what is an exception handling mechanism and what is an % exception before this. Probably in the introduction. Some of this could % move there. \paragraph{Raise / Handle} An exception operation has two main parts: raise and handle. These are the two parts that the user will write themselves and so might be the only two pieces of the EHM that have any syntax. These terms are sometimes also known as throw and catch but this work uses throw/catch as a particular kind of raise/handle. 
\subparagraph{Raise} The raise is the starting point for exception handling and usually how Some well known examples include the throw statements of \Cpp and Java and the raise statement from Python. For this overview a raise does nothing more kick off the handling of an exception, which is called raising the exception. This is inexact but close enough for the broad strokes of the overview. \subparagraph{Handle} The purpose of most exception operations is to run some sort of handler that contains user code. The try statement of \Cpp illistrates the common features Handlers have three common features: a region of code they apply to, an exception label that describes what exceptions they handle and code to run when they handle an exception. Each handler can handle exceptions raised in that region that match their exception label. Different EHMs will have different rules to pick a handler if multipe handlers could be used such as best match" or first found". \paragraph{Propagation} After an exception is raised comes what is usually the biggest step for the EHM, finding and setting up the handler. This can be broken up into three different tasks: searching for a handler, matching against the handler and installing the handler. First the EHM must search for possible handlers that could be used to handle the exception. Searching is usually independent of the exception that was thrown and instead depends on the call stack, the current function, its caller and repeating down the stack. Second it much match the exception with each handler to see which one is the best match and hence which one should be used to handle the exception. In languages where the best match is the first match these two are often intertwined, a match check is preformed immediately after the search finds a possible handler. Third, after a handler is chosen it must be made ready to run. What this actually involves can vary widely to fit with the rest of the design of the EHM. 
The installation step might be trivial or it could be the most expensive step in handling an exception. The latter tends to be the case when stack unwinding is involved. As an alternate third step if no appropriate handler is found then some sort of recovery has to be preformed. This is only required with unchecked exceptions as checked exceptions can promise that a handler is found. It also is also installing a handler but it is a special default that may be installed differently. \subparagraph{Hierarchy} In \CFA the EHM uses a hierarchial system to organise its exceptions. This stratagy is borrowed from object-orientated languages where the exception hierarchy is a natural extension of the object hierarchy. Consider the following hierarchy of exceptions: \begin{center} \setlength{\unitlength}{4000sp}% \end{picture}% \end{center} The hierarchy provides the ability to handle an exception at different degrees of specificity (left to right). Hence, it is possible to catch a more general exception-type in higher-level code where the implementation details are unknown, which reduces tight coupling to the lower-level implementation. Otherwise, low-level code changes require higher-level code changes, \eg, changing from raising @underflow@ to @overflow@ at the low level means changing the matching catch at the high level versus catching the general @arithmetic@ exception. In detail, each virtual type may have a parent and can have any number of children. A type's descendants are its children and its children's descendants. A type may not be its own descendant. The exception hierarchy allows a handler (@catch@ clause) to match multiple exceptions, \eg a base-type handler catches both base and derived exception-types. \begin{cfa} try { ... } catch(arithmetic &) { ... 
// handle arithmetic, underflow, overflow, zerodivide } \end{cfa} Most exception mechanisms perform a linear search of the handlers and select the first matching handler, so the order of handers is now important because matching is many to one. Each virtual type needs an associated virtual table. A virtual table is a structure with fields for all the virtual members of a type. A virtual type has all the virtual members of its parent and can add more. It may also update the values of the virtual members and often does. A handler labelled with any given exception can handle exceptions of that type or any child type of that exception. The root of the exception hierarchy (here \texttt{exception}) acts as a catch-all, leaf types catch single types and the exceptions in the middle can be used to catch different groups of related exceptions. This system has some notable advantages, such as multiple levels of grouping, the ability for libraries to add new exception types and the isolation between different sub-hierarchies. So the design was adapted for a non-object-orientated language. % Could I cite the rational for the Python IO exception rework? \paragraph{Completion} After the handler has finished the entire exception operation has to complete and continue executing somewhere else. This step is usually very simple both logically and in its implementation as the installation of the handler usually does the heavy lifting. The EHM can return control to many different places. However, the most common is after the handler definition and the next most common is after the raise. \paragraph{Communication} For effective exception handling, additional information is usually required as this base model only communicates the exception's identity. Common additional methods of communication are putting fields on an exception and allowing a handler to access the lexical scope it is defined in (usually a function's local variables). 
\paragraph{Other Features} Any given exception handling mechanism is free at add other features on top of this. This is an overview of the base that all EHMs use but it is not an exaustive list of everything an EHM can do. \section{Virtuals} Virtual types and casts are not part of the exception system nor are they required for an exception system. But an object-oriented style hierarchy is a great way of organizing exceptions so a minimal virtual system has been added to \CFA. The virtual system supports multiple trees" of types. Each tree is a simple hierarchy with a single root type. Each type in a tree has exactly one parent - except for the root type which has zero parents - and any number of children. Any type that belongs to any of these trees is called a virtual type. % A type's ancestors are its parent and its parent's ancestors. % The root type has no ancestors. % A type's decendents are its children and its children's decendents. Every virtual type also has a list of virtual members. Children inherit their parent's list of virtual members but may add new members to it. It is important to note that these are virtual members, not virtual methods. However as function pointers are allowed they can be used to mimic virtual methods as well. The unique id for the virtual type and all the virtual members are combined into a virtual table type. Each virtual type has a pointer to a virtual table as a hidden field. \todo{Open/Closed types and how that affects the virtual design.} While much of the virtual infrastructure is created, it is currently only used \Cpp syntax for special casts. Both the type of @EXPRESSION@ and @TYPE@ must be a pointer to a virtual type. The cast dynamically checks if the @EXPRESSION@ type is the same or a subtype The cast dynamically checks if the @EXPRESSION@ type is the same or a sub-type of @TYPE@, and if true, returns a pointer to the @EXPRESSION@ object, otherwise it returns @0p@ (null pointer). 
The function @get_exception_vtable@ is actually a constant function. Recardless of the value passed in (including the null pointer) it should Regardless of the value passed in (including the null pointer) it should return a reference to the virtual table instance for that type. The reason it is a function instead of a constant is that it make type and their use will be detailed there. However all three of these traits can be trickly to use directly. However all three of these traits can be tricky to use directly. There is a bit of repetition required but the largest issue is that the virtual table type is mangled and not in a user list will be passed to both types. In the current set-up the base name and the polymorphic arguments have to match so these macros can be used without losing flexability. match so these macros can be used without losing flexibility. For example consider a function that is polymorphic over types that have a It is dynamic, non-local goto. If a throw is successful then the stack will be unwound and control will (usually) continue in a different function on the call stack. They are commonly used when an error has occured and recovery the call stack. They are commonly used when an error has occurred and recovery is impossible in the current function. \end{cfa} The expression must return a reference to a termination exception, where the termination exception is any type that satifies @is_termination_exception@ termination exception is any type that satisfies @is_termination_exception@ at the call site. Through \CFA's trait system the functions in the traits are passed into the The throw will copy the provided exception into managed memory. It is the user's responcibility to ensure the original exception is cleaned up if the user's responsibility to ensure the original exception is cleaned up if the stack is unwound (allocating it on the stack should be sufficient). 
} \end{cfa} When viewed on its own a try statement will simply exceute the statements in When viewed on its own a try statement will simply execute the statements in @GUARDED_BLOCK@ and when those are finished the try statement finishes. Exception matching checks the representation of the thrown exception-type is the same or a descendant type of the exception types in the handler clauses. If it is the same of a descendent of @EXCEPTION_TYPE@$_i$ then @NAME@$_i$ is it is the same of a descendant of @EXCEPTION_TYPE@$_i$ then @NAME@$_i$ is bound to a pointer to the exception and the statements in @HANDLER_BLOCK@$_i$ are executed. If control reaches the end of the handler, the exception is closure will be taken from up the stack and executed, after which the throwing function will continue executing. These are most often used when an error occured and if the error is repaired These are most often used when an error occurred and if the error is repaired then the function can continue. \end{cfa} The semantics of the @throwResume@ statement are like the @throw@, but the expression has return a reference a type that satifies the trait expression has return a reference a type that satisfies the trait @is_resumption_exception@. The assertions from this trait are available to the exception system while handling the exception. At runtime, no copies are made. As the stack is not unwound the exception and At run-time, no copies are made. As the stack is not unwound the exception and any values on the stack will remain in scope while the resumption is handled. search and match the handler in the @catchResume@ clause. This will be call and placed on the stack on top of the try-block. The second throw then throws and will seach the same try block and put call another instance of the throws and will search the same try block and put call another instance of the same handler leading to an infinite loop. can form with multiple handlers and different exception types. 
To prevent all of these cases we mask sections of the stack, or equvilantly the try statements on the stack, so that the resumption seach skips over To prevent all of these cases we mask sections of the stack, or equivalently the try statements on the stack, so that the resumption search skips over them and continues with the next unmasked section of the stack. The symmetry with termination is why this pattern was picked. Other patterns, such as marking just the handlers that caught, also work but lack the symmetry whih means there is more to remember. symmetry which means there is more to remember. \section{Conditional Catch} } \end{cfa} Note, catching @IOFailure@, checking for @f1@ in the handler, and reraising the exception if not @f1@ is different because the reraise does not examine any of Note, catching @IOFailure@, checking for @f1@ in the handler, and re-raising the exception if not @f1@ is different because the re-raise does not examine any of remaining handlers in the current try statement. @return@ that causes control to leave the finally block. Other ways to leave the finally block, such as a long jump or termination are much harder to check, and at best requiring additional run-time overhead, and so are mearly and at best requiring additional run-time overhead, and so are mealy discouraged. this point.} The recommended way to avoid the abort is to handle the intial resumption The recommended way to avoid the abort is to handle the initial resumption from the implicate join. 
If required you may put an explicate join inside a finally clause to disable the check and use the local
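The changeset's description of hierarchy matching (a handler labelled with an exception type also handles any descendant type, and the first matching handler wins) can be illustrated with a Python analogue. This is only a sketch of the idea, not the \CFA implementation the thesis describes:

```python
# Python analogue of exception-hierarchy matching: a handler labelled
# with a base type also catches its descendants, and handlers are tried
# in order, so the first match wins. Sketch only; not \CFA code.
class Arithmetic(Exception): pass
class Underflow(Arithmetic): pass
class Overflow(Arithmetic): pass

def classify(exc):
    """Return which handler label would catch exc."""
    try:
        raise exc
    except Underflow:
        return "underflow"    # leaf handler: most specific, listed first
    except Arithmetic:
        return "arithmetic"   # base handler: catches the other children

print(classify(Overflow()))   # arithmetic (caught via its parent type)
print(classify(Underflow()))  # underflow  (first matching handler wins)
```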
https://tex.stackexchange.com/questions?sort=newest&page=3398 | # All Questions
174,186 questions
14 views
### To search a specific histogram in root file
I have a huge root file i.e hist-sample.root in which there are thousands of histogrms. I was using to command to search a specific histogram. It was something like gDirectory->("Ptbb"). Something is ...
8 views
### Beamer, overlay and standalone
Short version Is it possible to use standalone to produce externally my figures with overlays and include them directly in my main document ? More details Using the answers of the question Beamer ...
10 views
### Listing code in beamer frame
I have a problem with the package listings in beamer, when used in a frame. I report below a MWE (actually, it gives an error message). If I don't use \begin{frame} \end{frame}, then the example below ...
18 views
### Frame with caption and image below
I want to create a slideshow. For some slides I want to have a caption above the image but the rest of the slide filled with the image. How is this possible? Maybe with \usepackage{tikz}? If so, could ...
37 views
### Crossed out red box fitting tightly around image
I can do \textcolor{red}{ \fbox{ \includegraphics{path} } } To get a red frame sitting tightly around an image inside a firgure environment. Now, is there a way to also get a red cross over ...
14 views
### java eclipse - how do i print out a statement [on hold]
how to priint statment plese? I try already print("hello") but it no wok. Help appresiated. english not my fist language so sorry for mistake.
31 views
### latex is not generating pdf due to unsupported characters such as 👗 [on hold]
Here is the sample code \begin{filecontents*}{example.eps} %!PS-Adobe-3.0 EPSF-3.0 %%BoundingBox: 19 19 221 221 %%CreationDate: Mon Sep 29 1997 %%Creator: programmed by hand (JK) %%...
39 views
### Tikz positioning above circle exact alignment
How can the balls b1 and b2 be aligned exactly above each other? Also, above=1cm of b1.center, anchor=center and all kinds of combinations with north - south, north east - south west does not yield ...
17 views
### change the default path where Latex looks for everything
I'm trying to throw different projects into one document that are in different folders. These folders include images and texfiles. I know that I can always include files with \input{} or \include{} ...
8 views
### Minted package does not recognize lexer inside listing
I have a problem with the minted package. I am using two different lexers, juliaand jlcon. The first one properly highlights code, but the second one only works if the code is not inside a ...
21 views
### How to remove reference items in IEEEtran using Zotero and Overleaf?
The \bibliographystyle{IEEEtran} in my IEEE Overleaf template produces: [1] D. C. Knill and A. Pouget, “The Bayesian brain: the role of uncertainty in neural coding and computation ”...
38 views
### How to typeset \left \right parenthesis in LuaLaTeX in 2019?
With pdflatex I often used \left( and \right) to scale the parenthesis of a function as in $\exp\left(\frac{a}{b}\right)$ But this method was not perfect, because the code is difficult to read ...
15 views
### How to add caption to my figures
Here in my work there are two figures in which i want to add captions as Figure (i) and Figure (ii) separately. MWE: \documentclass{article} \usepackage[ a4paper,top=1in,bottom=1in,left=0.7in,right=...
10 views
### centre multirow horizontally
Consider this example: \documentclass{article} \usepackage{makecell} \usepackage{multirow} \begin{document} \begin{tabular}{|l|r|l|} \multirow{2}{*}{Sample} & \multicolumn{2}{c|}{AA}\\ ...
18 views
### Tikz figure in Table
Here is my code: \documentclass{article} \usepackage{fullpage} \usepackage{pgf-pie} \usepackage{tabularx} \usetikzlibrary{decorations.text} \usepackage{ifthen} \begin{document} \begin{table}[ht] %\...
27 views
### tikz-feynman: edge labels
I would like to understand how can I label edges of diagrams as at this picture: I assume that it possible to label vertices and than move each label, but I do not know how. Can anyone gives some ...
8 views
### Error Message in Displaying Algorithm
I am writing a thesis, and there I have three files, one is main, another one is Chapter and the last one is a.tex. main.tex is using to call the chapter. I have written main algorithm in a.tex. So ...
69 views
### Which fixes from mathtools were merged to the AMS packages already?
mathtools aims to fix bugs and problems with the AMS package set. Quote from the AMS site http://www.ams.org/publications/authors/tex/amslatex: In 2016, LuaTeX version 1 was released, and ...
13 views
### Raising errors when using fontawesome5 and XeTeX in Debian
I have been trying to compile my CV using the command xelatex cv.tex on a Debian-based TeXlive. I normally run it successfully on a MacOS with MacTex using the package fontawesome5. However, when I ...
175 views
### Drawing a german abacus as in the books of Adam Ries
I am teaching a course in history of mathematics and would like to draw something like the following: The bullet points should be possible to draw on the lines and in between. Can anyone help me? ...
22 views
### format box around text like this?
Can you help me format like this. Thank you so much! Problem 1. Let ACB be a right-angled triangle with right angle CAB… (Problem is a counter)
18 views
### Specifying coordinates in tikzcd arrows with a foreach loop
I'm trying to use a foreach loop to draw a whole bunch of arrows in a tikzcd environment by using the foreach loop to specify coordinates but I can't get it to compile. Here's a minimal example of ...
22 views
### Can anyone provide code for table below? [on hold]
How can we write this table in latex
14 views
### Changing keys appearing on the final pdf
My current state is and I would like to change this appearance of cites with a single number to three characters juxtaposed with two numbers, each of which denotes authors and years of a paper, ...
18 views
### Left and right parentheses won't appear in equation
I have a problem with \left( and \right) not showing up in the equations in a document. I have used both \usepackage{amsmath}, \usepackage{amssymb} and \usepackage{mathtools}. I tried with these three ...
14 views
### Tikz-dependency and gb4e Alignment Issues
I am using tikz-dependency and gb4e inside one example. I want them to be centered, but I could not managed to do so. Here's my self advancement: \documentclass{article} \usepackage[utf8]{inputenc} \...
82 views
### First instead of 1 when referencing
I'm using an enumeration with labels assigned to the items. Referencing to the items it works fine, but in some cases I would like to have "1st" or even "first" instead of just "1", with the former ...
19 views
### Font question (MIT) [duplicate]
I'm not sure if I'm on the right forum for this. I've been a TeX user for awhile now, and I like to play around with different fonts. I was wondering what font they are using in the link below, and if ...
20 views
### How to reduce top margin?
How can I reduce the margin between the first line of my document and the headerline? \documentclass[12pt, a4paper]{article} \usepackage[ top=0.6in,bottom=0.6in,left=0.7in,right=0.5in]{geometry} \...
10 views
### Shifting or stretching a float
I have got a float, that is minimal too high for a page. Is it possible to either shift it up a bit, so that it fits between the head- and footsepline or to stretch it with, say 0.9? My float ...
26 views
### Nocite like command in biblatex?
I'm trying to have all my references printed on the PDF, the thing is that I only cite 1 of all the entries in the .bib file. And looking at the documentation on biblatex it says I have to use the ...
32 views
### Prevent empty equations numbering?
\begin{align} fdsafdsadf\\ \end{align} will display two equation numbers. The trailing \ starts a new line... it is useful to add them so one can easily add or remove equations... or even comment ...
10 views
### Package morefloats Error: Too many floats requested. }
I get the following error: Package morefloats Error: Too many floats requested. } while trying to run a "*.tex" file, which runs OK on my desktop, on my Mac. The IDE that I am using is Texstudio. ...
30 views
### Alignment issue in a titlepage
Here is what I am trying to achieve, but I am getting this, with this syntax, \documentclass[14pt, a4paper]{article} \setlength{\oddsidemargin}{0.5cm} \setlength{\evensidemargin}{0.5cm} \...
43 views
### How to TeX this glyph?
It is 0x20 in this table. http://www.math.union.edu/~dpvc/jsmath/symbols/cmr10.html According to amsfonts and the structure of .afm files (See Chapter 8 of https://www.adobe.com/content/dam/acom/en/...
23 views
### Consecutive equation numbers with different section numbers in the front
I am looking for commands that enable me to have consecutive equation numbers with different section numbers in the front. For example, when the last equation of section 2 is (2.18), the next equation ...
8 views
### Glossaries-extra: Adding glossaries package to “ClassicThesis” template by Dr. André Miede v. 4.6
I have been using LaTeX the "ClassicThesis" template by Dr. André Miede v. 4.6, for my thesis document. I require to add Lists of definitions, abbreviations, symbols to this template. I was trying ...
12 views
### How to change default build in kile to Lulatex?
Even I went to Settings->Config kile->Tools->Build and changed the QuickBuild tool to "LuaLatex+ViewPDF". Every time I save the file, it just keeps running pdflatex. It is not unusable but it is ...
20 views
### How to center multiple figures and their descriptions
I am writing a paper for one of my classes on Catalan numbers. I have a section where I'm trying to include three different figures that I would like to be centered with their corresponding ...
33 views
### ! Missing { inserted
\begin{align*} f_{y_1,y_2} (y_1,y_2) &= f__{X_1,X_2}(y_1, \frac{(1-y_1)}{2} - \sqrt{ \frac{y_2}{2} -\frac{{y_1}^{2}}{2} - \frac{(1-y_1)^{2}}{4}} ) \abs {J_1} + f__{X_1,X_2}(y_1, \frac{(1-y_1)}{2}...
24 views
### Include filename in a nested document
I have a large project split in multiple files. I'd like to get each subfile name in the pdf to monitor my document. From different posts on this forum i understand Currfile package could help by ...
34 views
### Summations of a summation without upper bound as a lemma
So I have a paper I'm writing for a class that requires me to include a proof. I have the proof all sorted out on paper but I'm having a very hard time figuring out where to begin in Latex. I need ...
18 views
### bibliography backreferences with hyperref: format individual page number
I would like to format the backreferences in italics when the citation is called from a custom command (but not otherwise). I've been playing with both \backrefxxx and \backrefalt inside the custom ...
41 views
### TikZ: Piping and instrumentation diagram (P&ID) shapes available?
To draw electrical wiring schemes in TikZ the nice package circuitikz is available. However, is there something similar existent for drawing piping and instrumentation diagrams? Example of such a ...
14 views
I'm having a problem integrating asmems4.bst supplied by my department for my thesis. Several of my references are websites and asmems4.bst doesn't output URLs in my bibliography. From an answer to ...
24 views
I have some problems with my table of contents in a LaTex document in Overleaf. I have added 6 subfiles that are appendices. They display as I want in ToC, but most of them are not clickable. This ...
67 views
### How to draw organometallic compounds?
I'm trying to draw this molecule using the chemfig package but honestly don't know where or how to start. Please help. Thank you. \documentclass{article} \usepackage{chemfig} \setcrambond{3pt}{}{} \...
23 views
### Disabling page numbering bottom of page
Newbie question here. I seem to have some trouble changing the page numbering. I want it in the header, instead of the bottom of the page. I got the header working, but I have been deleting and ... | 2019-04-25 12:39:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9591519236564636, "perplexity": 3366.0054823621604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578721441.77/warc/CC-MAIN-20190425114058-20190425140058-00171.warc.gz"} |
http://www.nbi.ku.dk/gravitational-waves/gravitational-waves.html | Back to home
Comments on our paper, 'On the time lags of the LIGO signals'
James Creswell, Sebastian von Hausegger, Andrew D. Jackson, Hao Liu, Pavel Naselsky
June 27, 2017
Our recent arXiv posting of arXiv:1706.04191 "On the time lags of the LIGO signals" has generated considerable interest - both positive and negative. This is understandable given the significance of the claimed discovery of gravitational waves resulting from the merger of two black holes. In our opinion, a discovery of this importance merited a genuinely independent analysis of the data. It was the aim of our manuscript to perform such an analysis using publicly available data from LIGO and methods that were as close as possible to those adopted by the LIGO team. We focussed our attention mainly on the first event, GW150914, with special attention to the time lag between the arrival times of the signal at the Hanford and Livingston detectors. In our view, if we are to conclude reliably that this signal is due to a genuine astrophysical event, apart from chance-correlations, there should be no correlation between the "residual" time records from LIGO's two detectors in Hanford and Livingston. The residual records are defined as the difference between the cleaned records and the best GW template found by LIGO. Residual records should thus be dominated by noise, and they should show no correlations between Hanford and Livingston. Our investigation revealed that these residuals are, in fact, strongly correlated. Moreover, the time delay for these correlations coincides with the 6.9 ms time delay found for the putative GW signal itself.
As a member of the LIGO collaboration, Ian Harry states that he "tried to reproduce the results quoted in 'On the time lags of the LIGO signals'", but that he "[could] not reproduce the correlations claimed in section 3". Subsequent discussions with Ian Harry have revealed that this failure was due to several errors in his code. After necessary corrections were made, his script reproduces our results. His published version was subsequently updated. Regarding the results presented here, we also release a version of our script for comparison.
The process of separating signal and noise reliably is always difficult, but it is particularly challenging when the noise is neither gaussian nor stationary as in the present case. A thorough understanding of "cross talk" between the detectors is essential to ensure that cleaning techniques lead to reliable signal extraction. In the following we will describe a safe analysis based on publicly available LIGO data. This analysis will reveal a strong correlation between the Hanford and Livingston residuals.
We hope that interested people will repeat our calculations and will make up their own minds regarding the significance of the results. It is obvious that "belief" is never an alternative to "understanding" in physics.
Cross-correlations
The event GW150914 is characterized by its shape and its almost simultaneous appearance in the Hanford and Livingston detectors with a time lag of only $6.9$ ms. Here, we briefly review a method for confirming this time lag with the aid of cross-correlations to later apply it to the residual noise in the immediate vicinity of GW150914.
We denote the strain data, $H(t)$ and $L(t)$, within a given time interval $t_a\leq t\leq t_b$ as $H_{t_a}^{t_b}$, and $L_{t_a}^{t_b}$, respectively. Shifting a selected piece of Livingston data with respect to the Hanford data by a time lag $\tau$ allows for the calculation of the cross-correlation between the two records as a function of $\tau$. Due to the non-stationarity of the signal, we wish to only include values within a selected range, ensured by the method sketched below. Thus, our assumption is that within a sufficiently small time window the residuals behave in a stationary manner.
Figure 1: Illustration of the procedure for computing the cross-correlation between Hanford and Livingston as a function of the time lag $\tau$.
Using the scheme above, we define the cross-correlation coefficient as $$C(t,\tau,w) = {\rm Corr}(H_{t+\tau_0 + \tau}^{t-\tau_0+\tau+w},L_{t+\tau_0}^{t-\tau_0+w}),$$ where $\tau_0$ is chosen to ensure that only values within the selected range $[t, t+w]$ are included. (We note that the GW150914 signal appeared first at the Livingston site and was seen at the Hanford site approximately $6.9$ ms later. Thus, the equation above has been written so that $\tau$ is positive for GW150914. We will restrict the time delay to $-10 \leq\tau\leq 10$ ms since this is the only region of physical interest for gravitational wave detection. Therefore we choose $\tau_0 = 10$ ms.) Here, ${\rm Corr}(x,y)$ is the standard Pearson cross-correlation function between records $x$ and $y$ defined to lie within the window, $w$: $${\rm Corr}(x_{t+\tau}^{t+\tau+w},y_{t}^{t+w}) = \frac{{\rm Cov}(x_{t+\tau}^{t+\tau+w},y_{t}^{t+w})}{\sqrt{{\rm Cov}(x_{t+\tau}^{t+\tau+w},x_{t+\tau}^{t+\tau+w}) \cdot {\rm Cov}(y_{t}^{t+w},y_{t}^{t+w})}},$$ where ${\rm Cov}(x,y)$ is the usual covariance defined as ${\rm Cov}(x,y)=\langle (x-\langle x \rangle)(y - \langle y \rangle) \rangle$ and $\langle ... \rangle$ is the average within the window considered.
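The windowed Pearson correlation defined above is straightforward to compute numerically. A minimal NumPy sketch (ours, for illustration; not the authors' released script):

```python
import numpy as np

def pearson_corr(x, y):
    """Corr(x, y) = Cov(x, y) / sqrt(Cov(x, x) * Cov(y, y)) for two
    equal-length records x and y, with the mean taken over the window."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc = x - x.mean()
    yc = y - y.mean()
    return float(np.dot(xc, yc) / np.sqrt(np.dot(xc, xc) * np.dot(yc, yc)))
```

Applying this to a Hanford segment shifted by a lag $\tau$ against a fixed Livingston segment yields one point of the function $C(t,\tau,w)$.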
Residual noise
We begin our search for correlations in the residual noise with a simple cross-correlation test (hereafter CC-test) between Hanford and Livingston records using the data provided by the LIGO collaboration:
https://losc.ligo.org/s/events/GW150914/P150914/fig1-residual-H.txt
https://losc.ligo.org/s/events/GW150914/P150914/fig1-residual-L.txt
In the 0.2 s record including the GW event, the residual noise components in these records are calculated as $H_n=H-H_{\rm tpl}$ and $L_n=L-L_{\rm tpl}$. Here, $H_{\rm tpl}$ and $L_{\rm tpl}$ are the templates cleaned in precisely the same manner as the raw data to yield the cleaned records, $H$ and $L$. In order to highlight general properties of the cleaned GW150914 signal, the numerical relativity templates and the residuals, before turning to the CC-test, we show below the Fourier amplitudes for these various components for both Hanford and Livingston detectors, and briefly describe their peculiarities.
Figure 2: Left panel: The power spectrum for the Hanford GW150914 cleaned record (red), as well as the corresponding best-fit template (black) and the residuals (blue). Right panel: The same as the left panel but for the Livingston record with lines colored as in the left panel.
First, for both detectors, the templates and the cleaned data show a pronounced peak in the Fourier amplitudes near $50$ Hz that is associated with the band-pass filtering that selects the region of $35$ to $300$ Hz. Second, the amplitudes of the Hanford residuals decrease rapidly for $f<70$ Hz, while the Livingston residual amplitudes have a maximum value near $40$ Hz, which coincides with a peak in the cleaned record. Third, as is readily seen from the figures, for frequencies $f > 270$ Hz, the cleaned data is dominated by the residuals for both the H and L detectors. Moreover, we find amplitudes in the cleaned records that are substantially lower than those in the templates in, e.g. the frequency range $100 < f < 150$ Hz. It is especially worth noting this peculiarity in Hanford, where the amplitudes of the cleaned data even fall below those of the residuals at frequencies around $70$ Hz and $120$ Hz. It should be kept in mind, however, that these "anomalies" describe properties of the residuals for the individual detectors and do not necessarily lead to cross-correlations between the H and L residuals. This "cross-talk" between these records is the subject of our paper and is summarized below.
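The residuals and their Fourier amplitudes discussed above can be reproduced along these lines. This is an illustrative sketch assuming equal-length cleaned and template strain arrays sampled at the 16384 Hz rate of the released records; the column layout of the published text files may differ:

```python
import numpy as np

FS = 16384  # sampling rate of the released records, in Hz

def residual_spectrum(cleaned, template, fs=FS):
    """Residual H_n = H - H_tpl and its one-sided Fourier amplitude
    spectrum, of the kind plotted in Figure 2."""
    residual = np.asarray(cleaned, dtype=float) - np.asarray(template, dtype=float)
    freqs = np.fft.rfftfreq(residual.size, d=1.0 / fs)
    amps = np.abs(np.fft.rfft(residual))
    return residual, freqs, amps
```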
The CC-test for the restricted time domain $\pm 10$ ms
The cross-correlation of $H_n$ and $L_n$ is calculated as a function of the time delay $\tau$. The constraint $|\tau|\le 10$ ms is dictated by the physical conditions for the arrival of a GW. All correlations are calculated in the time domain as described in the section above on cross-correlations, namely:
• Choose a certain time range $[t, t+w]$.
• Shift the Livingston record by one bin, two bins, etc. (each corresponding to a time delay of 1/16384 s) in order to calculate the cross-correlation function $C(t, \tau, w)$. (The normalisation of the cross-correlation function is dependent on $\tau$.)
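The two steps above can be sketched as follows. This is our illustrative reimplementation, not the released script; for simplicity the window is anchored at $t$ rather than $t+\tau_0$, so the shifted segment may extend slightly outside $[t, t+w]$:

```python
import numpy as np

FS = 16384  # sampling rate of the released records, in Hz

def cc_test(h, l, t, w, tau_max_ms=10.0, fs=FS):
    """Cross-correlation C(t, tau, w) between residual records h (Hanford)
    and l (Livingston) for lags |tau| <= tau_max_ms.  t and w are in
    seconds; returns arrays of lags (s) and correlation coefficients."""
    n_tau = int(round(tau_max_ms * 1e-3 * fs))  # lag range in bins
    i0 = int(round(t * fs))                     # window start in bins
    n_w = int(round(w * fs))                    # window length in bins
    taus, corrs = [], []
    for k in range(-n_tau, n_tau + 1):          # shift bin by bin (1/16384 s)
        xs = h[i0 + k : i0 + k + n_w]           # shifted Hanford segment
        ys = l[i0 : i0 + n_w]                   # fixed Livingston segment
        x = xs - xs.mean()
        y = ys - ys.mean()
        corrs.append(float(np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y))))
        taus.append(k / fs)
    return np.array(taus), np.array(corrs)
```

The lag at which the correlation peaks can then be read off with `np.argmax` (or `np.argmin` for anti-correlated, inverted records).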
The figures below, equivalent to Fig. 7 in our paper, show the H and L time records and residuals after subtraction of the numerical relativity template. The left panel shows the original data, and the right panel shows the Livingston data shifted by 7 ms and inverted. The shaded area marks the range 0.39 to 0.43 seconds within which we compute the cross-correlation function presented below.
Figure 3: Top panels: The Hanford (blue) and Livingston (red) records and their residuals after subtraction of the respective templates before (left) and after (right) shifting the Livingston record by 7 ms and inverting it. Bottom panel: The cross-correlation $C(t, \tau, w)$ for the noise records as a function of the time delay $\tau$ for various time records as indicated in the legend.
As seen, the calculation can be repeated for a variety of time intervals indicated in the legend. The figure reveals a surprisingly large cross correlation of $-0.81$ for the optimal window of 0.39 to 0.43 s. (The fact that this value is negative is a consequence of the fact that the Livingston signal should be inverted.) This is the region in which the "chirp" effect is most pronounced. We note that the cross-correlation between the Hanford and Livingston residuals has a magnitude greater than $0.12$ for the four ranges shown above. It is even more significant that a strong negative correlation is obtained for a time delay of approximately 7 ms for each of the time windows considered. While we did not further elaborate on this in our present paper, we will make investigations of significance the subject of a forthcoming study.
It should be noted that the template used here is the maximum-likelihood waveform. However, a family of such waveforms can be found that fit the data equally well. (See e.g. the panels in the second row of Fig. 1 in LIGO's detection paper of GW150914.) In order to provide a rough estimate of this uncertainty, we have also explored the possibility of a free $\pm10\%$ scaling of the amplitude of the templates, so that $H_n=H-(1 \pm 0.1)H_{\rm tpl}$ and $L_n=L-(1 \pm 0.1)L_{\rm tpl}$. (This crude estimate of the uncertainties will obviously change both the magnitudes and phases of the Fourier amplitudes of the residuals.) The resulting cross correlations are virtually identical to the results shown above. Given that the residual noise is significantly greater than the uncertainty introduced by the family of templates, this result is not surprising.
It would appear that the 7 ms time delay associated with the GW150914 signal is also an intrinsic property of the noise. The purpose in having two independent detectors is precisely to ensure that, after sufficient cleaning, the only genuine correlations between them will be due to gravitational wave effects. The results presented here suggest this level of cleaning has not yet been obtained and that the identification of the GW events needs to be re-evaluated with a more careful consideration of noise properties.
We hope that our comment will improve the understanding of the major results in our paper. We are thankful to Alessandra Buonanno and Ian Harry for scientific discussions, and for making their Python script available to us and the scientific community. | 2017-07-21 16:58:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7194917798042297, "perplexity": 562.5804665584367}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423787.24/warc/CC-MAIN-20170721162430-20170721182430-00041.warc.gz"} |
https://forum.azimuthproject.org/discussion/comment/18816/ | #### Howdy, Stranger!
It looks like you're new here. If you want to get involved, click one of these buttons!
Options
# About "category" in everyday language
"Category" is a pretty common word in everyday language. For example:
1. On the left side of this forum we see the grouping "Categories" which subsume various hyperlinked items such as "All Categories", "Applied Category Theory Course", "Applied Category Theory Exercise", etc.
2. In English dictionaries, we see examples like "the various categories of research" (Oxford), "taxpayers fall into one of several categories" (Merriam-Webster), "I wouldn't put this book in the same category as the author's first novel" (Wiktionary), etc.
3. In other disciplines (less everyday), such as Aristotle's categories in philosophy, grammatical categories in linguistics, etc.
I wondered if such non-mathematical uses of "category" could be given some category-theoretic modeling. Considering we already have a category Cat for all categories, might there also a category for all uses of the word "category"? Or is it simply part of Cat?
I'm guessing such a modeling, if possible, wouldn't involve too much technical complication - perhaps just sets - but it'd be interesting to have a CT perspective to the issue. :-)
1.
edited May 2018
I wondered if such non-mathematical uses of "category" could be given some category-theoretic modeling. Considering we already have a category Cat for all categories, might there also a category for all uses of the word "category"? Or is it simply part of Cat?
Sadly, I don't think so. Category is just a term that mathematicians made up. It's an overloaded term, however. Aristotle used it one way, as you mention. Categorical distributions in statistics refer to something else. Kant's categorical imperative is something yet different still.

A similarly confusing word is monad. It started as an idea in ancient Greek philosophy: Parmenides thought of his monad as the totality of all things, while in Leibniz's Monadology (1714) it referred to a sort of elementary, indivisible substance. Centuries later, the mathematician Abraham Robinson took inspiration from Leibniz's approach to calculus in devising non-standard analysis and called one of his constructs a monad, though it has nothing to do with Leibniz's monads as far as I can tell; the term is so confusing that Goldblatt just calls them halos in his 1998 text Lectures on the Hyperreals. Saunders Mac Lane coined monad as it is commonly used in category theory about a decade later, and it has nothing to do with Robinson's monad. In the late 80s Philip Wadler introduced monads to Haskell to model side effects, and they have been baffling programmers ever since. And there is still monadic first-order logic, where monadic just means that every predicate takes at most one argument.

Keeping the terminology straight is hard. When a computer scientist talks about a network topology, she doesn't mean general topology. She usually means a graph structure. When a cryptographer talks about lattice cryptography, they don't care about lattices from lattice theory. And λ-calculus isn't the calculus we learned in high school.
2.
edited May 2018
Thanks for the very informative comment, @Matthew! Indeed, very often disciplinary concepts could have been given less overloaded terminology (e.g. I can totally feel your "calculus" point!). Apart from this perhaps socio-historical matter, though, what about the non-technical usage of the word "category" (in the sense of a class/type I guess)? Since many people don't have training in CT/philosophy/logic, the terminological overloading is irrelevant for them, but the "category" in their vocabulary - due to its sense - may still correspond to something mathematically modelable, e.g. as a set or an equivalence class, and since when a certain "category" is mentioned it is usually presupposed that several distinct "categories" exist on the given matter (e.g. a book may belong to one "category" or another), perhaps what's at issue here is the organization of "categories" per se (hope I'm not talking nonsense...)?
On that note, I realize my initial question about the entire class of different uses of a word (of which "category" is an example) could easily drive me to the direction of lexical semantics, so perhaps it wasn't a meaningful pursuit in our CT context anyway... (or is it?) :P
• Options
3.
I believe Lawvere tried to connect the meanings across philosophy and math. The same with "natural transformation" and "functor".
In fact, the words "category", "functor", and "natural transformation" were swooped from philosophy if memory serves me.
4.
@Keith Yeah I noticed the terminological overlapping between CT and philosophy/linguistics too - I had initially thought we borrowed from CT but it turned out to be the other way around. :P Do you perhaps know where Lawvere made the philosophy–math connection? I would be interested to read on that. :)
5.
I don't recall offhand exactly which text that came from.
6.
edited June 2018
I agree with Matthew that pushing the parallelism is a bad idea. In Aristotle, I recall, categories were supreme genera, maximal elements in an order with some resemblance to proper classes in set theory, but not much fruitful analogy with modern ones comes to mind. In the case of syntactic categories in linguistics the situation is worse because it seeds confusion: sometimes one needs to speak simultaneously of syntactic categories in the context of a mathematically categorical treatment, as Lambek did, and there are interferences when merging the two traditions (linguists and mathematicians). I think linguists now say "categorial" and mathematicians "categorical" to mark the distinction, as in combinatory categorial grammar.
7.
Matthew, I'm an imperative languages guy (did some lisp ages ago...) but find itchy that in your enumeration you seem to put Haskellite monads in other shelf than the category-theory ones. I had the vague impression that in Haskell there was more or less the intention to reflect the mathematical idea (as in say Moggi), even if in some dialectal form or obeying language servitudes.
8.
How about this one concerning the definition of functor -- in Prolog, everything is defined as a structure or term of the form: functor(arg1, arg2, ..).
Functors are thus the Prolog building blocks to declaratively relate or map different categories.
9.
Aw yes, now I remember. It was on nlab's category theory page.
https://ncatlab.org/nlab/show/category+theory
Specifically:
Now the discovery of ideas as general as these is chiefly the willingness to make a brash or speculative abstraction, in this case supported by the pleasure of purloining words from the philosophers: “Category” from Aristotle and Kant, “Functor” from Carnap (Logische Syntax der Sprache), and “natural transformation” from then current informal parlance.
(Saunders Mac Lane, Categories for the Working Mathematician, 29–30).
10.
edited October 2018
A point of history, Wadler did not introduce monads in PL's, Moggi and Plotkin did.
11.
@Keith Yeah I came across that paragraph too when reading Mac Lane's textbook. :-)
It's been so long since I last logged in (due to overwhelming workload...) but now that I have a bit more time I'm determined to catch up with everything that happened here while I went on estivation! :P
12.
> Matthew, I'm an imperative languages guy (did some lisp ages ago...) but find itchy that in your enumeration you seem to put Haskellite monads in other shelf than the category-theory ones. I had the vague impression that in Haskell there was more or less the intention to reflect the mathematical idea (as in say Moggi), even if in some dialectal form or obeying language servitudes.

Haskell monads are just monads on the category Hask (which is a Haskell programming term for the somewhat nebulous category of Haskell types and functions). | 2019-07-15 20:00:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7319441437721252, "perplexity": 2619.6763185576715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524111.50/warc/CC-MAIN-20190715195204-20190715221204-00209.warc.gz"}
https://www.marefa.org/%D8%B6%D8%AF_%D9%85%D8%AA%D9%88%D8%A7%D8%B2%D9%8A_%D8%A7%D9%84%D8%A3%D8%B6%D9%84%D8%A7%D8%B9 | Antiparallelogram
An antiparallelogram is a quadrilateral in which each pair of non-adjacent sides is congruent and in which (unlike a parallelogram) each pair of opposite sides crosses.
If the lengths of two adjacent sides are in the ratio $\sqrt{2}:1$, the midpoints of the two opposite sides form the shape of the infinity symbol.
An antiparallelogram is a special case of a crossed quadrilateral, which has generally unequal edges.[1] A special form of the antiparallelogram is a crossed rectangle, in which two opposite edges are parallel.
Properties
Every antiparallelogram has an axis of symmetry through its crossing point. Because of this symmetry, it has two pairs of equal angles as well as two pairs of equal sides.[2] Together with the kites and the isosceles trapezoids, antiparallelograms form one of three basic classes of quadrilaterals with a symmetry axis. The convex hull of an antiparallelogram is an isosceles trapezoid, and every antiparallelogram may be formed from the non-parallel sides (or either pair of parallel sides in case of a rectangle) and diagonals of an isosceles trapezoid.[3]
Every antiparallelogram is a cyclic quadrilateral, meaning that its four vertices all lie on a single circle.
In polyhedra
The small rhombihexahedron. Slicing off any vertex of this shape gives an antiparallelogram cross-section as the vertex figure.
The small rhombihexacron, a polyhedron with antiparallelograms (formed by pairs of coplanar triangles) as its faces.
Several nonconvex uniform polyhedra, including the tetrahemihexahedron, cubohemioctahedron, octahemioctahedron, small rhombihexahedron, small icosihemidodecahedron, and small dodecahemidodecahedron, have antiparallelograms as their vertex figures, the cross-sections formed by slicing the polyhedron by a plane that passes near a vertex, perpendicularly to the axis between the vertex and the center.[4]
For uniform polyhedra of this type in which the faces do not pass through the center point of the polyhedron, the dual polyhedron has antiparallelograms as its faces; examples of dual uniform polyhedra with antiparallelogram faces include the small rhombihexacron, the great rhombihexacron, the small rhombidodecacron, the great rhombidodecacron, the small dodecicosacron, and the great dodecicosacron. The antiparallelograms that form the faces of these dual uniform polyhedra are the same antiparallelograms that form the vertex figure of the original uniform polyhedron.
Bricard octahedron constructed as a double pyramid over an antiparallelogram.
One form of a non-uniform but flexible polyhedron, the Bricard octahedron, can be constructed as a double pyramid over an antiparallelogram.[5]
The antiparallelogram has been used as a form of four-bar linkage, in which four rigid beams of fixed length (the four sides of the antiparallelogram) may rotate with respect to each other at joints placed at the four vertices of the antiparallelogram. In this context it is also called a butterfly or bow-tie linkage. As a linkage, it has a point of instability in which it can be converted into a parallelogram and vice versa.
Fixing the short edge of an antiparallelogram linkage causes the crossing point to trace out an ellipse.
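This focal property can be checked numerically. The sketch below is an editorial addition: the bar lengths (long bar p = 2, short bar q = 1) and the crank-angle sampling are arbitrary choices. It simulates the four-bar linkage with the short edge fixed and verifies that the crossing point of the two long bars keeps a constant sum of distances to the two fixed joints, equal to the long bar length, which is exactly the defining property of an ellipse with those joints as foci.

```python
import numpy as np

def cross2(a, b):
    """z-component of the 2D cross product."""
    return a[0] * b[1] - a[1] * b[0]

def circle_intersections(c0, r0, c1, r1):
    """Both intersection points of circle(c0, r0) and circle(c1, r1)."""
    d = np.linalg.norm(c1 - c0)
    a = (r0**2 - r1**2 + d**2) / (2 * d)
    h = np.sqrt(max(r0**2 - a**2, 0.0))
    mid = c0 + a * (c1 - c0) / d
    perp = np.array([-(c1 - c0)[1], (c1 - c0)[0]]) / d
    return mid + h * perp, mid - h * perp

def segment_crossing(p1, p2, p3, p4):
    """Intersection of segments p1p2 and p3p4, or None if they don't cross."""
    r, s = p2 - p1, p4 - p3
    denom = cross2(r, s)
    if abs(denom) < 1e-12:
        return None
    t = cross2(p3 - p1, s) / denom
    u = cross2(p3 - p1, r) / denom
    return p1 + t * r if (0 <= t <= 1 and 0 <= u <= 1) else None

p, q = 2.0, 1.0                                    # long / short bar lengths
F1, F2 = np.array([0.0, 0.0]), np.array([q, 0.0])  # fixed short edge (the foci)

sums = []
for theta in np.linspace(0.3, 2.8, 50):            # drive bar F1-M1 by angle
    M1 = F1 + p * np.array([np.cos(theta), np.sin(theta)])
    # Of the two placements of M2, one gives a parallelogram (long bars
    # parallel, no crossing) and the other the antiparallelogram.
    for M2 in circle_intersections(F2, p, M1, q):
        X = segment_crossing(F1, M1, F2, M2)
        if X is not None:
            sums.append(np.linalg.norm(X - F1) + np.linalg.norm(X - F2))

print(round(min(sums), 6), round(max(sums), 6))    # both equal p = 2.0
```

The constant sum follows from the linkage's symmetry: the crossing point lies on the bar F1-M1, and reflection across the moving symmetry axis identifies the distance to M1 with the distance to F2.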
Celestial mechanics
In the n-body problem, the study of the motions of point masses under Newton's law of universal gravitation, an important role is played by central configurations, solutions to the n-body problem in which all of the bodies rotate around some central point as if they were rigidly connected to each other. For instance, for three bodies, there are five solutions of this type, given by the five Lagrangian points. For four bodies, with two pairs of the bodies having equal masses (but with the ratio between the masses of the two pairs varying continuously), numerical evidence indicates that there exists a continuous family of central configurations, related to each other by the motion of an antiparallelogram linkage.[6]
References
2. ^ Cite error: Invalid <ref> tag; no text was provided for refs named round | 2021-08-01 17:51:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8833125233650208, "perplexity": 498.0136092755957}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154214.63/warc/CC-MAIN-20210801154943-20210801184943-00547.warc.gz"}
https://www.semanticscholar.org/paper/Towards-universal-topological-quantum-computation-%CE%BD-Freedman-Nayak/313a26b8ab274517d8211dcf2dd0a1dcf50250e8 | # Towards universal topological quantum computation in the ν = 5/2 fractional quantum Hall state
@article{Freedman2006TowardsUT,
title={Towards universal topological quantum computation in the $\nu$ = 5/2 fractional quantum Hall state},
author={Michael H. Freedman and C. Nayak and Kevin Walker},
journal={Physical Review B},
year={2006},
volume={73},
pages={245307}
}
• Published 5 December 2005
• Physics
• Physical Review B
The Pfaffian state, which may describe the quantized Hall plateau observed at Landau level filling fraction $\ensuremath{\nu}=\frac{5}{2}$, can support topologically-protected qubits with extremely low error rates. Braiding operations also allow perfect implementation of certain unitary transformations of these qubits. However, in the case of the Pfaffian state, this set of unitary operations is not quite sufficient for universal quantum computation (i.e. is not dense in the unitary group). If…
74 Citations
Non-Abelian Anyons and Topological Quantum Computation
• Physics
• 2008
Topological quantum computation has emerged as one of the most exciting approaches to constructing a fault-tolerant quantum computer. The proposal relies on the existence of topological states of
Braid matrices and quantum gates for Ising anyons topological quantum computation
• Physics
• 2010
Abstract We study various aspects of the topological quantum computation scheme based on the non-Abelian anyons corresponding to fractional quantum hall effect states at filling fraction 5/2 using
Quantum origami: Transversal gates for quantum computation and measurement of topological order
• Mathematics
Physical Review Research
• 2020
It is demonstrated that multi-layer topological states with appropriate boundary conditions and twist defects allow modular transformations to be implemented effectively by a finite sequence of local SWAP gates between the layers. Methods to directly measure the modular matrices, and thus the fractional statistics of anyonic excitations, are also provided, giving a novel way to directly measure topological order.
Parafermions in a Kagome lattice of qubits for topological quantum computation
• Physics
• 2015
Engineering complex non-Abelian anyon models with simple physical systems is crucial for topological quantum computation. Unfortunately, the simplest systems are typically restricted to Majorana zero
Fractionalizing Majorana fermions: non-abelian statistics on the edges of abelian quantum Hall states
• Physics
• 2012
We study the non-abelian statistics characterizing systems where counter-propagating gapless modes on the edges of fractional quantum Hall states are gapped by proximity-coupling to superconductors
Measurement-only quantum computation with Floquet Majorana corner modes
• Physics
• 2020
Majorana modes, typically arising at the edges of one-dimensional topological superconductors, are considered to be a promising candidate for encoding nonlocal qubits in fault-tolerant quantum
A Blueprint for a Topologically Fault-tolerant Quantum Computer
• Physics, Computer Science
• 2010
A schematic blueprint for a fully topologically-protected Ising based quantum computer is provided, which may serve as a starting point for attempts to construct a fault-tolerant quantum computer, which will have applications to cryptanalysis, drug design, efficient simulation of quantum many-body systems, searching large databases, engineering future quantum computers, and -- most importantly -- those applications which no one in the authors' classical era has the prescience to foresee. | 2022-05-22 00:59:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6552414298057556, "perplexity": 1878.7351616553294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543264.49/warc/CC-MAIN-20220522001016-20220522031016-00657.warc.gz"} |
http://www.topperlearning.com/forums/ask-experts-19/please-give-me-a-mathematical-proof-of-the-following-express-physics-motion-in-a-plane-54808/reply | Question
Mon April 16, 2012 By: Avi Wadhwa
# please give me a mathematical proof of the following expression: |a × b| = |a| |b| sin c, where 'c' is the angle between vector 'a' and vector 'b' (I wrote the question in words rather than in the form of a mathematical expression)
Tue April 17, 2012
The cross product a × b is defined as a vector c that is perpendicular to both a and b, with a direction given by the right-hand rule and a magnitude equal to the area of the parallelogram that the vectors span.
The cross product is defined by the formula
a × b = |a| |b| sin(θ) n
where θ is the measure of the smaller angle between a and b (0° ≤ θ ≤ 180°), |a| and |b| are the magnitudes of vectors a and b, and n is a unit vector perpendicular to the plane containing a and b in the direction given by the right-hand rule. If the vectors a and b are parallel (i.e., the angle θ between them is either 0° or 180°), then by the above formula the cross product of a and b is the zero vector 0.
The direction of the vector n is given by the right-hand rule, where one simply points the forefinger of the right hand in the direction of a and the middle finger in the direction of b. Then, the vector n is coming out of the thumb. Using this rule implies that the cross product is anti-commutative, i.e., b × a = −(a × b). By pointing the forefinger toward b first, and then pointing the middle finger toward a, the thumb will be forced in the opposite direction, reversing the sign of the product vector.
Using the cross product requires the handedness of the coordinate system to be taken into account (as explicit in the definition above). If a left-handed coordinate system is used, the direction of the vector n is given by the left-hand rule and points in the opposite direction.
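As a numerical sanity check (an editorial addition, not part of the original answer), the snippet below verifies the magnitude formula, the perpendicularity of a × b to both factors, and anti-commutativity for a pair of arbitrarily chosen example vectors:

```python
import numpy as np

a = np.array([2.0, 1.0, 0.5])
b = np.array([-1.0, 3.0, 2.0])

c = np.cross(a, b)
cos_theta = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.arccos(cos_theta)   # smaller angle between a and b, in [0, pi]

# |a x b| equals |a||b|sin(theta), and c is perpendicular to both a and b.
assert np.isclose(np.linalg.norm(c),
                  np.linalg.norm(a) * np.linalg.norm(b) * np.sin(theta))
assert np.isclose(c @ a, 0.0) and np.isclose(c @ b, 0.0)

# Anti-commutativity: b x a = -(a x b).
assert np.allclose(np.cross(b, a), -c)
print("all identities hold")
```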
Sat December 24, 2016 | 2017-01-18 01:49:18 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8155233263969421, "perplexity": 484.0248164721709}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00527-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://brilliant.org/problems/inspired-by-tamirat-solomon/ | # Inspired by tamirat solomon
What is the value of $n$, with $n$ a natural number, for which $297^n+396^n=495^n$?
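An editorial note, not part of the original problem page: since 297 = 3·99, 396 = 4·99 and 495 = 5·99, dividing through by 99^n reduces the equation to 3^n + 4^n = 5^n, the 3-4-5 Pythagorean relation, so n = 2. A brute-force check confirms this:

```python
# 297 = 3*99, 396 = 4*99, 495 = 5*99, so dividing by 99**n reduces the
# equation to 3**n + 4**n == 5**n, the 3-4-5 Pythagorean triple (n = 2).
solutions = [n for n in range(1, 100) if 297**n + 396**n == 495**n]
print(solutions)  # [2]
```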
× | 2017-05-29 21:06:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.432536244392395, "perplexity": 832.4688540447403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463612553.95/warc/CC-MAIN-20170529203855-20170529223855-00367.warc.gz"} |
https://dc.uwm.edu/etd/1812/ | ## Theses and Dissertations
#### Title
Compactifications of Manifolds with Boundary
August 2018
Dissertation
#### Degree Name
Doctor of Philosophy
#### Department
Mathematics
Craig R Guilbault
#### Committee Members
Peter Hinow, Chris Hruska, Boris Okun, Allen Bell
#### Abstract
This dissertation is concerned with compactifications of high-dimensional manifolds.
Siebenmann's iconic 1965 dissertation \cite{Sie65} provided necessary and
sufficient conditions for an open manifold $M^{m}$ ($m\geq6$) to be
compactifiable by addition of a manifold boundary. His theorem extends easily
to cases where $M^{m}$ is noncompact with compact boundary; however when
$\partial M^{m}$ is noncompact, the situation is more complicated. The goal
becomes a “completion” of $M^{m}$, i.e., a
compact manifold $\widehat{M}^{m}$ containing a compactum $A\subseteq\partial M^{m}$ such that $\widehat{M}^{m}\backslash A\approx M^{m}$. Siebenmann did
some initial work on this topic, and O'Brien \cite{O'B83} extended that work
to an important special case. But, until now, a complete characterization had
yet to emerge. Here we provide such a characterization.
Our second main theorem involves $\mathcal{Z}$-compactifications. An important
open question asks whether a well-known set of conditions laid out by Chapman
and Siebenmann \cite{CS76} guarantee $\mathcal{Z}$-compactifiability for a
manifold $M^{m}$. We cannot answer that question, but we do show that those
conditions are satisfied if and only if $M^{m}\times\lbrack0,1]$ is
$\mathcal{Z}$-compactifiable. A key ingredient in our proof is the above
Manifold Completion Theorem---an application that partly explains our current
interest in that topic, and also illustrates the utility of the $\pi_{1}$-condition found in that theorem. Chapter \ref{Chapter 1} is based on joint work with Professor Craig Guilbault \cite{GG17}.
Finally, we obtain a complete characterization of pseudo-collarable $n$-manifolds for $n\geq 6$. This extends earlier work by Guilbault and Tinsley to allow for manifolds with noncompact boundary. In the same way that their work can be viewed as an extension of Siebenmann's dissertation applicable to manifolds with non-stable fundamental group at infinity, the Pseudo-collarability Characterization Theorem can be viewed as an extension of the Manifold Completion Theorem applicable to manifolds whose fundamental group at infinity is not peripherally stable.
COinS | 2019-01-24 06:41:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6402464509010315, "perplexity": 1719.0890869318935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584519382.88/warc/CC-MAIN-20190124055924-20190124081924-00018.warc.gz"} |
https://www.research.ed.ac.uk/portal/en/publications/measurement-of-differential-cross-sections-of-isolatedphoton-plus-heavyflavour-jet-production-in-pp-collisions-at-sqrts8-tev-using-the-atlas-detector(0c06d602-b025-4937-98a6-09251f1fa948).html | Measurement of differential cross sections of isolated-photon plus heavy-flavour jet production in pp collisions at $\sqrt{s}=8$ TeV using the ATLAS detector
Research output: Contribution to journalArticle
Original language: English
Pages: 295-317
Journal: Physics Letters B
Volume: B776
DOI: https://doi.org/10.1016/j.physletb.2017.11.054
Status: Published - 10 Jan 2018
Abstract
This Letter presents the measurement of differential cross sections of isolated prompt photons produced in association with a b-jet or a c-jet. These final states provide sensitivity to the heavy-flavour content of the proton and aspects related to the modelling of heavy-flavour quarks in perturbative QCD. The measurement uses proton-proton collision data at a centre-of-mass energy of 8 TeV recorded by the ATLAS detector at the LHC in 2012 corresponding to an integrated luminosity of up to 20.2 fb$^{-1}$. The differential cross sections are measured for each jet flavour with respect to the transverse energy of the leading photon in two photon pseudorapidity regions: $|\eta^\gamma|<1.37$ and $1.56<|\eta^\gamma|<2.37$. The measurement covers photon transverse energies $25 < E_\textrm{T}^\gamma<400$ GeV and $25 < E_\textrm{T}^\gamma<350$ GeV respectively for the two $|\eta^\gamma|$ regions. For each jet flavour, the ratio of the cross sections in the two $|\eta^\gamma|$ regions is also measured. The measurement is corrected for detector effects and compared to leading-order and next-to-leading-order perturbative QCD calculations, based on various treatments and assumptions about the heavy-flavour content of the proton. Overall, the predictions agree well with the measurement, but some deviations are observed at high photon transverse energies. The total uncertainty in the measurement ranges between 13% and 66%, while the central $\gamma+b$ measurement exhibits the smallest uncertainty, ranging from 13% to 27%, which is comparable to the precision of the theoretical predictions. 
| 2019-11-22 21:26:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7784528136253357, "perplexity": 1102.990836916372}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671548.98/warc/CC-MAIN-20191122194802-20191122223802-00447.warc.gz"} |
https://www.physicsforums.com/threads/northern-component-of-velocity.184221/ | # Northern Component of velocity
1. Sep 12, 2007
### anglum
Components of velocity
1. The problem statement, all variables and given/known data
An airplane travels at 146 km/h toward the northeast.
What is the northern component of its velocity? Answer in units of km/h.
2. Relevant equations
a^2 + b^2 = c^2
3. The attempt at a solution
However, I am not sure whether to use the Pythagorean formula to solve this?
Last edited: Sep 12, 2007
2. Sep 12, 2007
### drpizza
Yes, you're on the right track. (Assuming by "North East" they mean that 45 degree angle between north and east. That means NorthEast would be the hypotenuse/resultant of a two vectors: 1 north and 1 east of equal magnitudes.)
3. Sep 12, 2007
### Staff: Mentor
First, the plane speed is 146 km/h (not km=h).
Next, you find the components in the north and east directions by multiplying the hypoteneuse by the sine or cosine of the appropriate angles....
4. Sep 12, 2007
### anglum
ooo i have to do sin/cosine of the angle
5. Sep 12, 2007
### anglum
so to find the horizontal velocity it would be the cos of 45 = vx/ 146? which equals 76.699 km/h
and to get the vertical it would be sin of 45 = vy/146? which equals 124.231 km/h
Last edited: Sep 12, 2007
6. Sep 12, 2007
### Staff: Mentor
Yes. That's an unusual way to write it, however. More like this (I'll use latex):
$$v_y = 146 km/h * sin(45)$$
7. Sep 12, 2007
### Staff: Mentor
Well, except for the math you did. The sin and cos of 45 degrees should be the same....
8. Sep 12, 2007
### anglum
ok thanks guys i got it i was doing some bad math... thank you
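Editorial note, not part of the original thread: a quick numeric check of the corrected arithmetic. At 45° the sine and cosine coincide, so the northern and eastern components are equal.

```python
import math

v = 146.0                      # airplane speed in km/h
theta = math.radians(45.0)     # northeast is 45 degrees from both north and east

v_north = v * math.sin(theta)  # northern component
v_east = v * math.cos(theta)   # eastern component

print(round(v_north, 2), round(v_east, 2))  # 103.24 103.24
```

Recombining the two components with the Pythagorean formula recovers the original 146 km/h.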
Last edited: Sep 12, 2007 | 2017-12-14 16:01:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5770305395126343, "perplexity": 3923.7946421505026}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948544677.45/warc/CC-MAIN-20171214144324-20171214164324-00255.warc.gz"} |
https://www.nature.com/articles/s41598-022-18821-5 | ## Introduction
With the recent advances in machine vision and its applications, there is a growing demand for sensor hardware that is faster, more energy-efficient, and more sensitive than frame-based cameras, such as charge-coupled devices (CCDs) or complementary metal–oxide–semiconductor (CMOS) imagers1,2. Beyond event-based cameras (silicon retinas)3,4, which rely on conventional CMOS technology and have reached a high level of maturity, there is now increasing research on novel types of image acquisition and data pre-processing techniques5,6,7,8,9,10,11,12,13,14,15,16,17,18, with many of them emulating certain neurobiological functions of the human visual system.
One image pre-processing technique, that is being used since decades, is pixel binning. Binning is the process of combining the electric signals from $$K$$ adjacent detector elements into one larger pixel. This offers benefits such as (1) increased frame rate due to a $$K$$-fold reduction in the amount of output data, and (2) an up to $$K^{1/2}$$-fold improvement in signal-to-noise ratio (SNR) at low light levels or short exposure times19. The latter can be understood from the fact that dark noise is collected in normal mode for every detector element, but in binned mode only once per $$K$$ elements. Binning, however, comes at the expense of reduced spatial resolution or, in more general terms, loss of information. In pattern recognition applications this reduces the accuracy of the results even if the SNR is high.
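As a concrete illustration of conventional binning (a generic sketch, not code from the paper), summing each k × k block of a detector array combines K = k² elements per output pixel and reduces the data volume K-fold:

```python
import numpy as np

def bin_pixels(image: np.ndarray, k: int) -> np.ndarray:
    """Combine each k x k block of detector elements into one larger
    pixel by summing their signals (K = k*k elements per bin)."""
    h, w = image.shape
    image = image[: h - h % k, : w - w % k]          # crop to a multiple of k
    return image.reshape(h // k, k, w // k, k).sum(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)   # toy 4 x 4 sensor readout
binned = bin_pixels(img, 2)           # 2 x 2 output, 4 elements per bin
```

The summed signal per bin grows K-fold while read noise is incurred only once per bin, which is the origin of the up to K^(1/2) SNR advantage at low light levels.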
Here, we push the concept of binning to its limit by combining a large fraction of the sensor elements into a “superpixel” whose optimal shape is determined from training data using a machine learning algorithm. We demonstrate the classification of optically projected images on an ultrashort timescale, with enhanced dynamic range and without loss of classification accuracy.
## Results and discussion
### Pixel binning
In Fig. 1 we schematically depict different types of binning and its impact on the classification accuracy of an artificial neural network (ANN). Besides the aforementioned conventional approach (orange lines), we also illustrate our concept of data-driven binning (green line). There, a substantial fraction of pixels are combined into a “superpixel” that extends over the whole face of the chip, thus forming a large-area photodetector with a complex geometrical structure that is determined from training data. For multi-class classification with one-hot encoding, one such superpixel is required for each class. As for conventional binning, the system becomes more resilient towards noise and its dynamic range increases. However, for large light intensities there is no loss of classification accuracy and hence no compromise in performance, in contrast to the conventional case. These benefits come at the cost of less flexibility, as a custom configuration/design is required for each specific application.
### Photosensor implementation
Figure 2a shows a schematic of our photosensor, employing data-driven binning. A microscope photograph of the actual device implementation is shown in Fig. 2b. For details regarding the fabrication, we refer to the “Methods” section. The device consists of $$N$$ pixels, arranged in a two-dimensional array. Each pixel is divided into at most $$M$$ subpixels that are connected–binned–together to form the $$M$$ superpixels, whose output currents are measured. Each detector element is composed of a GaAs Schottky photodiode (Fig. 2c) that is operated under short-circuit conditions (Fig. 2d) and exhibits a photoresponsivity of $$R = I_{{{\text{SC}}}} /P \approx$$ 0.1 A/W, where $$I_{{{\text{SC}}}}$$ is the photocurrent and $$P$$ the incident optical power. GaAs was chosen because of its short absorption and diffusion lengths, which both reduce undesired cross-talk between adjacent pixels; with some minor modifications the sensor can also be realized using Si instead of GaAs. The design parameters, which depend on the specific classification task and are determined from training data, are the geometrical fill factors $$f_{{{\text{mn}}}} = A_{{{\text{mn}}}} /A$$ for each of the subpixels, where $$A_{{{\text{mn}}}}$$ denotes the subpixel area and $$A$$ is the total area of each pixel. From Fig. 2a, we find for the $$M$$ output currents $$I_{{\text{m}}} = R\mathop \sum \limits_{{{\text{n}} = 1}}^{{\text{N}}} f_{{{\text{mn}}}} P_{{\text{n}}}$$, or
$${\mathbf{i}} = R{\mathbf{Fp}},$$
(1)
with $${\mathbf{p}} = \left( {P_{1} ,P_{2} , \ldots ,P_{{\text{N}}} } \right)^{T}$$ being a vector that represents the optical image projected onto the chip, $${\mathbf{i}} = \left( {I_{1} ,I_{2} , \ldots ,I_{{\text{M}}} } \right)^{T}$$ the output current vector, and $${\mathbf{F}} = \left( {f_{{{\text{mn}}}} } \right)_{{{\text{M}} \times {\text{N}}}}$$ a fill factor matrix that depends on the specific application. The $$m$$-th row of $${\mathbf{F}}$$ is a vector $${\mathbf{f}}_{{\text{m}}} = \left( {f_{{{\text{m}}1}} ,f_{{{\text{m}}2}} , \ldots ,f_{{{\text{m}}N}} } \right)^{T}$$ that represents the geometrical shape of the $$m$$-th superpixel.
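Equation (1) is just a matrix–vector product, so the sensor's readout can be modelled in a few lines (a sketch with illustrative values: the fill-factor matrix below is random, not a trained one, and the optical powers are arbitrary):

```python
import numpy as np

R = 0.1                 # responsivity in A/W, as quoted for the GaAs diodes
N, M = 784, 10          # pixels (28 x 28 image) and classes/superpixels

rng = np.random.default_rng(0)
F = rng.uniform(0.0, 0.1, size=(M, N))   # fill-factor matrix (random stand-in)
p = rng.uniform(0.0, 1e-6, size=N)       # optical power per pixel, in watts

i = R * (F @ p)                          # Eq. (1): one output current per superpixel
predicted_class = int(np.argmax(i))      # one-hot readout: pick the largest current
```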
### Naïve Bayes photosensor
Let us now discuss how to design the fill factor matrix for a specific image recognition problem. As an instructive example, we present the classification of handwritten digits (‘0’, ‘1’, …, ’9’) from the MNIST dataset20 by evaluating the posterior $${\mathbb{P}}\left( {y_{{\text{m}}} {|}{\mathbf{p}}} \right)$$ (the probability $${\mathbb{P}}$$ of an image $${\mathbf{p}}$$ being a particular digit $$y_{{\text{m}}}$$) for all classes and selecting the most probable outcome. By applying Bayes' theorem and further assuming that the features (pixels) are conditionally independent, one can derive a predictor of the form $$\hat{y}_{{\text{m}}} = {\text{arg max}}_{{{\text{m}} \in \left\{ {1 \ldots {\text{M}}} \right\}}} {\mathbb{P}}\left( {y_{{\text{m}}} } \right) \mathop \prod \limits_{{{\text{n}} = 1}}^{{\text{N}}} {\mathbb{P}}\left( {P_{{\text{n}}} {|}y_{{\text{m}}} } \right)$$, known as Naïve Bayes (NB) classifier21,22. We use a multinomial event model $${\mathbb{P}}\left( {P_{{\text{n}}} {|}y_{{\text{m}}} } \right) = \pi_{{{\text{mn}}}}^{{P_{{\text{n}}} }}$$, where $$\pi_{{{\text{mn}}}}$$ is the probability that the $$n$$-th pixel for a given class $$y_{{\text{m}}}$$ exhibits a certain brightness and express the result in log-space to obtain a linear discriminant function
$$\hat{y}_{{\text{m}}} = \mathop {\text{arg max}}\limits_{{{\text{m}} \in \left\{ {1 \ldots {\text{M}}} \right\}}} \left( {{\mathbf{Wp}} + {\mathbf{b}}} \right)_{{\text{m}}}$$
(2)
with weights $$w_{{{\text{mn}}}} = \log \pi_{{{\text{mn}}}}$$. The bias terms $$b_{{\text{m}}} = \log {\mathbb{P}}\left( {y_{{\text{m}}} } \right)$$ can be omitted ($${\mathbf{b}} = 0$$), as all classes are equiprobable. The similarity to Eq. (1) allows us to map the algorithm onto our device architecture: $${\mathbf{F}} \propto {\mathbf{W}}$$. To match the calculated $$w_{{{\text{mn}}}}$$-value range to the physical constraints of the hardware implementation,
$$0 \le f_{{{\text{mn}}}} \le 1 \ {\text{ and }} \ \mathop \sum \limits_{{\text{m}}} f_{{{\text{mn}}}} \le 1,$$
(3)
we normalize the weights according to
$$f_{{{\text{mn}}}} = \frac{{w_{{{\text{mn}}}} - \min w_{{{\text{mn}}}} }}{{\max \mathop \sum \nolimits_{{\text{m}}} (w_{{{\text{mn}}}} - \min w_{{{\text{mn}}}} )}}.$$
(4)
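Under the stated multinomial model, the mapping from class statistics to fill factors can be sketched as follows (toy data: `counts` stands in for the MNIST pixel statistics, and the Laplace smoothing is my addition to avoid log 0):

```python
import numpy as np

def fill_factors(counts: np.ndarray) -> np.ndarray:
    """counts[m, n]: total brightness of pixel n over training images of class m.
    Returns F obeying Eq. (3): 0 <= f_mn <= 1 and sum_m f_mn <= 1."""
    smoothed = counts + 1.0
    pi = smoothed / smoothed.sum(axis=1, keepdims=True)  # multinomial pi_mn
    w = np.log(pi)                                # weights w_mn = log pi_mn
    shifted = w - w.min()                         # w_mn - min w_mn, as in Eq. (4)
    return shifted / shifted.sum(axis=0).max()    # divide by the largest column sum

rng = np.random.default_rng(0)
F = fill_factors(rng.integers(0, 50, size=(10, 784)).astype(float))
```

Dividing by the largest column sum guarantees that the subpixel areas within any one pixel never exceed the pixel area, which is exactly the hardware constraint of Eq. (3).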
In Fig. 3a we exemplify the working principle of the photosensor. A sample $${\mathbf{p}}$$ from the MNIST dataset is optically projected onto the chip using the measurement setup shown in Fig. 3b (see “Methods” section for experimental details). Each of the $$M$$ superpixels generates a photocurrent $$I_{{\text{m}}}$$ proportional to the inner product $${\mathbf{f}}_{{\text{m}}}^{T} {\mathbf{p}}$$. If we visualize $${\mathbf{f}}_{{\text{m}}}$$ for each class (Fig. 3c), we obtain an intuitive result: The shape of each superpixel resembles that of the average-looking digit for the respective class. It is apparent that the superpixel with the largest spatial overlap with the image delivers the highest photocurrent.
Figure 3e shows experimental photocurrent maps for the device in Fig. 2b. Here, each pixel of the sensor is illuminated individually and the output currents are recorded. The currents are proportional to the designed fill factors in Fig. 3c (apart from device imperfections such as broken lithographic connections), confirming negligible cross-talk between neighbouring subpixels. To evaluate the performance, we projected all 10,000 digits from the MNIST test dataset and recorded the sensor’s predictions. The classification results are presented as a confusion matrix in Fig. 3f. The chip is able to classify digits with an accuracy that closely matches the theoretical result in Fig. 3d.
### Artificial neural network photosensor
Beyond the instructive example of NB, the same device structure also allows the implementation of other, more accurate, classifiers. Specifically, we present the design and simulation results for a single-layer ANN21 for the same MNIST classification task as discussed before. In Fig. 4a the architecture of the network is shown. It makes its predictions according to
$$\hat{y}_{{\text{m}}} = \mathop {\text{arg max}}\limits_{{{\text{m}} \in \left\{ {1 \ldots {\text{M}}} \right\}}} \sigma \left( {{\mathbf{Wp}} + {\mathbf{b}}} \right)_{{\text{m}}}$$
(5)
Note the similarity to Eq. (2), apart from a nonlinearity $$\sigma$$ which can be readily implemented, either in the analogue or the digital domain, using external electronics. We choose a softmax activation function for $$\sigma$$. Again, due to the physical constraints of the sensor hardware, we train the network with bias $${\mathbf{b}} = 0$$ using categorical cross-entropy loss. In order to obey Eq. (3), we further introduce a constraint that enforces a non-negative weight matrix $${\mathbf{W}}$$ by performing the following regularization after each training step:
$${\mathbf{W}} \leftarrow {\mathbf{W}} \odot \theta \left( {\mathbf{W}} \right),$$
(6)
with $$\odot$$ denoting the Hadamard product and $$\theta$$ the Heaviside step function. This leads to a < 1% penalty in accuracy.
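The regularization in Eq. (6) amounts to zeroing out negative weights after every update. A minimal training step for the bias-free softmax network might look like this (a sketch: the gradient expression is the standard softmax/cross-entropy one, and the learning rate is arbitrary, neither taken from the paper):

```python
import numpy as np

def project(W: np.ndarray) -> np.ndarray:
    """Eq. (6): W <- W ⊙ θ(W), i.e. keep only non-negative weights."""
    return W * (W > 0)

def train_step(W, p, y_onehot, lr=0.1):
    z = W @ p
    s = np.exp(z - z.max()); s /= s.sum()       # softmax activation, bias b = 0
    grad = np.outer(s - y_onehot, p)            # categorical cross-entropy gradient
    return project(W - lr * grad)               # gradient step, then Eq. (6)

rng = np.random.default_rng(0)
W = project(rng.normal(size=(10, 784)))         # start from a non-negative W
y = np.zeros(10); y[3] = 1.0                    # one-hot label for class '3'
W = train_step(W, rng.uniform(size=784), y)
```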
The fill factor matrix $${\mathbf{F}}$$, plotted in Fig. 4d, is directly related to $${\mathbf{W}}$$ by a geometrical scaling factor. Although the superpixel shapes do not clearly resemble the handwritten digits, the ANN shows better performance than the NB classifier, as demonstrated by the confusion matrix in Fig. 4b. In addition, the ANN shows a larger spread between the highest and all other output currents (Fig. 4c), which makes it more robust against noise (Supplementary Figure S2). A number of other machine learning algorithms can be described by an equation of the form (5) and can be implemented in a similar fashion. The realization of an all-analogue deep-learning network is also feasible by feeding the sensor output into a memristor crossbar array24,25.
### Benefits of data-driven binning
In Fig. 5 we demonstrate the benefits of data-driven binning. It is evident that the readout of $$M$$ photodetector signals requires less time, resources, and energy than the readout of the whole image in a conventional image sensor. In fact, the photodiode array itself does not consume any energy at all; energy is only consumed by the electronic circuit that selects the highest photocurrent. Pattern recognition and classification occur in real time and are only limited by the physics of the photocurrent generation and/or the electrical bandwidth of the data acquisition system. This is demonstrated in Fig. 5a, where we show the correct classification of an image on a nanosecond timescale, limited by the bandwidth of the amplifier used.
Furthermore, it is known that binning can offer a $$K^{1/2}$$-fold improvement in SNR19. In our case, a substantial fraction $$\xi$$ ($$\sim$$ 0.6 for NB) of all sensor pixels are binned together ($$K = \xi N$$), with each pixel being split into $$M$$ elements. Together, this results in a $$\left( {\xi N} \right)^{1/2} /M$$-fold SNR gain over the unbinned case. To characterize the noise performance, we performed binary image classification (NB, MNIST, ‘0’ versus ‘1’) at different light intensities. For the reference measurements, we projected the images sequentially, pixel by pixel, onto a single GaAs Schottky photodetector (fabricated on the same wafer and with an area identical to that of two subpixels), recorded the photocurrents, and performed the classification task in a computer. In the simulations, Gaussian noise was added by drawing random samples from a normal distribution $${\mathcal{N}}\left( {0,\sigma^{2} } \right)$$ with zero mean value. The noise was added once per superpixel in the data-driven case, and once per pixel in the reference case. $$\sigma$$ was used as a single fitting parameter to reproduce all experimental results. The results are presented in Fig. 5b. The classification accuracy is affected by the amplifier noise. For large intensities, the system operates with its designed accuracy. As the intensity is decreased, the classification accuracy drops and eventually, when the noise dominates over the signal, reaches the baseline of random guessing. Our device, employing data-driven binning, can perform this task at lower light intensities than the reference device without binning.
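The low-light behaviour described here can be reproduced qualitatively with a toy simulation (the template currents and σ below are invented for illustration; as in the simulations described above, one noise sample is drawn per superpixel output):

```python
import numpy as np

rng = np.random.default_rng(0)

templates = np.array([[1.0, 0.3],      # superpixel currents for class '0'
                      [0.3, 1.0]])     # and for class '1' (arbitrary units)
sigma = 0.05                           # Gaussian amplifier-noise level (made up)

def accuracy(intensity: float, trials: int = 20_000) -> float:
    labels = rng.integers(2, size=trials)
    currents = intensity * templates[labels] + rng.normal(0, sigma, (trials, 2))
    return float(np.mean(currents.argmax(axis=1) == labels))

acc_bright = accuracy(1.0)    # designed accuracy at high light intensity
acc_dark = accuracy(0.01)     # noise dominates: accuracy falls toward 0.5
```

Sweeping `intensity` traces out the same qualitative curve as Fig. 5b: a plateau at the designed accuracy followed by a drop to the random-guessing baseline.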
## Conclusions
We conclude with proposed routes for future research. The main limitation of our current device implementation is its lack of reconfigurability. While this may be appropriate in some cases (e.g. a dedicated spectroscopic application), reconfigurability of the sensor would in general be preferred. This may, for example, be achieved by employing photodetectors with tunable responsivities, or a programmable network based on a nonvolatile memory material26,27,28 to bin individual pixels together. Schemes other than standard one-hot encoding may make it possible to save hardware resources and extend the dynamic range further. Possible applications of our technology include industrial image recognition systems that require high-speed identification of simple objects or patterns, as well as optical spectroscopy, where the incoming light is dispersed into its different colors and the sensor is trained to recognize certain spectral features. In both cases classical machine learning algorithms will provide sufficient complexity and sophistication for the approximation of the dataset.
## Methods
### Device fabrication
Device fabrication started with the growth of a 400 nm thick $${\mathrm{n}}^{-}$$-doped ($${10}^{16}$$ $${\mathrm{cm}}^{-3}$$) GaAs epilayer by molecular beam epitaxy on a highly $${\mathrm{n}}^{+}$$-doped GaAs substrate. An ohmic contact on the $${\mathrm{n}}^{+}$$-side was defined by evaporation of Ge/Au/Ni/Au (15 nm/30 nm/14 nm/300 nm) and sample heating at 440 °C for 30 s. On the $${\mathrm{n}}^{-}$$-GaAs epilayer we deposited a 20 nm thick Al2O3 insulating layer by atomic layer deposition (ALD). We then defined a first metal layer (M1) by electron-beam lithography (EBL) and Ti/Au (3 nm/25 nm) evaporation. In the next step we deposited a 30 nm thick Al2O3 layer by ALD. We then defined an etch mask for the via holes, which connect metal layers M1 and M2, by EBL and etched the Al2O3 with 30% potassium hydroxide (KOH) aqueous solution. We then wrote an etch mask for the pixel windows via EBL and etched the aggregated 50 nm thick Al2O3 with a 30% KOH aqueous solution in two steps. Inside the pixel windows, we defined the subpixels with EBL by removing the naturally formed oxide on the GaAs substrate with a 37% hydrochloric acid (HCl) aqueous solution and evaporating 7 nm thick semitransparent Au. Finally, we defined the M2 metal layer with EBL and Ti/Au (5 nm/80 nm) evaporation. The continuity and solidity of the device was confirmed by scanning electron microscopy and electrical measurements.
### Experimental setup
A schematic of the experimental setup is shown in Fig. 3b. A light-emitting diode (LED) source (625 nm wavelength) illuminates, through a linear polarizer, a spatial light modulator (SLM). The SLM is operated in intensity-modulation mode and changes the polarization of the reflected light according to the displayed image. The reflected light is then filtered using a second linear polarizer, and the image is projected onto the chip. The photocurrents generated by the sensor are probed with a needle array, selected by a Keithley switch matrix and measured with a Keithley source measurement unit. For time-resolved measurements a pulsed laser source (522 nm wavelength, 40 ns) is used. Here, the output signals are amplified with a high-bandwidth (20 MHz) transimpedance amplifier. The pulsed laser source is triggered with a signal generator and an oscilloscope is used to record the time trace.
https://yale.universitypressscholarship.com/view/10.12987/yale/9780300100877.001.0001/upso-9780300100877-chapter-4 | ## Alan Blinder
Print publication date: 2004
Print ISBN-13: 9780300100877
Published to Yale Scholarship Online: October 2013
DOI: 10.12987/yale/9780300100877.001.0001
# Following the Leader: The Central Bank and the Markets
Chapter:
(p.65) Chapter 3 Following the Leader: The Central Bank and the Markets
Source:
The Quiet Revolution
Publisher:
Yale University Press
DOI:10.12987/yale/9780300100877.003.0007
# Abstract and Keywords
This chapter focuses on the complex and evolving relationship between central banks and the financial markets. It argues that central banks, which used to pride themselves on lording it over the markets, have been showing them increasing deference in recent years. Some modern central bankers seem to have become so deeply respectful of markets that a new danger is emerging: that monetary policymakers might be tempted to “follow the markets” slavishly, essentially delivering the monetary policy which the markets expect or demand. The chapter argues that many more central bankers now view the markets as repositories of great power and wisdom—as sages to be listened to rather than merely as forces to be reckoned with. However, when it comes to foreign exchange operations, central banks still strive to catch the markets off guard—even to push them around, if possible.
Over the past decade or so, central bank independence has been the subject of a vast outpouring of academic literature,1 a great deal of real-world debate in political and public policy circles, and a substantial amount of legal and institutional change in a wide variety of countries (not including the United States). Alex Cukierman (1998), who knows more about this subject than almost anyone else, has taken note of the strong worldwide trend toward greater central bank independence: Since 1989, more than two dozen countries have increased the independence of their central banks substantially. It seems that none have moved in the other direction. This trend has been widely applauded by economists, and for good reasons. Greater central bank independence appears to be associated statistically with superior macroeconomic performance,2 although questions have been raised about the direction of causation.3
Just about 100 percent of this voluminous academic and real-world attention to central bank independence has been devoted to independence from politics and, in particular, from political influence over monetary policy. But there is another kind of independence that at least some strains of modern central banking may actually be endangering: independence from the financial markets.
(p.66) What? you say. How can a central bank be independent of the markets when its policy actions work through markets, and when many of its most important indicators are market prices of one sort or another? Both objections are, of course, correct. When I speak of independence from the markets, I do not mean that central banks should either ignore market signals or rely on nonmarket methods for implementing monetary policy (for example, quantitative credit controls). What I do mean is that some modern central bankers seem to have become so deeply respectful of markets that a new danger is emerging: that monetary policymakers might be tempted to “follow the markets” slavishly, essentially delivering the monetary policy that the markets expect or demand.4
# Who's the Boss?
It was not always thus. If I may be forgiven for indulging in stereotypes for a moment, the older tradition viewed the central bank more as the stern schoolmarm disciplining the markets when they got out of line than as the eager and respectful student studying at the markets' knee. As both a positive and normative matter, there was little question about who was the leader and who was the follower. Part of the central banker's job was believed to be surprising or even bullying the markets.
Attitudes today are radically different. Modern central bankers pay rapt attention to what markets think they are up to—often as embodied in futures prices that flash across the Bloomberg screen in real time. Normally, the bank is loath to deviate from what the markets expect it to do. This is a two-way street, of course. Markets today scrutinize the central banks with an intensity once reserved for the Kremlin; and “don't fight the Fed” is a well-ingrained piece of Wall Street wisdom. But the point I want to make is that many more central bankers now than previously view the markets as repositories (p.67) of great power and wisdom—as sages to be listened to rather than merely as forces to be reckoned with. As I will discuss later, there is one major exception to this new rule: When it comes to foreign exchange operations, central banks still strive to catch the markets off guard—even to push them around, if possible. But by and large, the evolving norm of behavior is just the opposite. Central banks often listen to the markets in both senses of that verb.
For the most part, this is a healthy development. Investors and traders do, after all, back up their beliefs with huge amounts of money—although skeptics will note that most of it is OPM (other people's money). Furthermore, as we economists are fond of pointing out, market prices succinctly summarize the collective wisdom of a vast number of people with diverse beliefs and access to different information. To believe that you can outwit the markets on a regular basis requires an extreme hubris that few if any modern central bankers have. Nor should they have it.
That said, I'd like to raise a question about whether the pendulum may perhaps have swung too far, whether the roles of leader and follower may have been reversed too sharply, whether what began as a healthy respect for markets may be in danger of devolving into worship of a false hero. In brief, whether the quiet revolution in modern central banking may have taken a good thing too far.5
I have one particular hazard in mind. Imagine a stereotypical monetary policy committee that scrutinizes the term structure of interest rates or the futures markets or both, observes what the markets expect it to do on a meeting-by-meeting basis, and then delivers precisely that policy. Such behavior may sound like the proper outcome of central bank transparency that I discussed and extolled in chapter 1. But it is actually something quite different. A fully transparent central bank keeps the markets well informed, teaches them about its way of thinking, and offers appropriate clues to its future behavior—thereby making the markets better predictors (p.68) of the central banks' decisions. But in all these ways, the monetary authorities are leading the markets to what they see as the right conclusions. My concern is with a central bank that follows the markets rather than leads them.
On the surface, it might seem that following the markets should produce a pretty good policy record. After all, the resulting decisions would embody the aforementioned collective wisdom, which presumably far exceeds the combined wisdom represented on any monetary policy committee. But I fear that following the markets might lead to rather poor policy nonetheless—for several reasons.
One is that speculative markets tend to run in herds and to overreact to almost any stimulus,6 whereas central bankers need to be cautious and to act prudently. I have often had occasion to cite Blinder's Law of Speculative Markets, which is based on a truly colossal amount of armchair empiricism. It is this: When they respond to news, the markets normally get the direction right but exaggerate the magnitude by a factor between three and ten. (Three connotes calm markets, ten, volatile ones.) Central banks, by contrast, must not get carried away. They should have the personality of Alan Greenspan, not of Jim Morrison.
I am aware that Bob Shiller's provocative original work on overreaction in the stock and bond markets (Shiller 1979, 1981) spawned a huge literature, some of which establishes that Shiller-like evidence may not actually demonstrate that markets typically overreact. There are alternative explanations of the data that are consistent with the efficient markets hypothesis. Maybe the daily dose of news absorbed by the stock market really does change the present discounted value of future dividends by about 1 percent—and by 23 percent on that fateful October day in 1987! (If you believe that, I have an efficiently priced dot.com I'd like to sell you.)
Much of this debate is highly technical, and this is neither the time nor the place to summarize, extend, or adjudicate it. Nor am I (p.69) the person best qualified to do so. Let me just point out that Ptolemaic astronomy also had its ingenious defenders, whose cleverly constructed epicycles held back the Copernican tide for a while. As you can probably tell, I cast my vote with Shiller—as, by the way, do most market participants. As Fischer Black, who lived successfully in both worlds, once put it, markets look far more efficient from the banks of the Charles than from the banks of the Hudson.7
Taking it as a fact that financial markets frequently overreact, an interesting question is why. One presumptive explanation is herding behavior. Even people from Connecticut and New Jersey know this much about cattle and sheep: that while they may be individually rational, the behavior of the herd sometimes produces results that, shall we say, stray pretty far from group rationality. Just as lemmings follow their leaders over cliffs, the seventeenth-century Dutch placed their faith in tulip bulbs, and the early-eighteenth-century French followed John Law into oblivion. Lest we think that modern-day Americans are vastly more sophisticated than those simpletons of yore, it was not so very long ago that gullible investors scrambled to fork over literally unbelievable sums for shares of Internet companies that never had any realistic ideas about how to make money. (Remember the idea of “paying for eyeballs”?—that is, valuing companies on the number of website “hits” rather than on profits or even sales.)
There is by now a substantial theoretical and empirical literature on herding, most of which—having been produced by economists—pertains to what might be called rational herding, that is, cases in which A follows B for reasons that are perfectly (individually) rational.8 For example, herding might be based on the belief that others have valuable private information. I learn that Warren Buffet has bought a stock, believe it is because he knows something I don't know, and therefore buy the stock myself. That may be quite rational. But if everyone emulates Buffet, the stock price may get (p.70) pumped up way beyond anything that Buffet's private information (if he really has any) can plausibly justify. Other models of rational herding are based on the reputations of fund managers and the way they are compensated (for example, it may pay to stay with the pack). In such cases, rational behavior by individuals may (but need not) lead to inefficient market outcomes.
But models of rational behavior may not capture the most important reasons for the herding phenomenon we observe in real markets. For example, so-called momentum investing—which means buying a stock just because it has recently gone up—is plainly irrational by standard definitions because, in efficient markets, stock prices are supposed to approximate random walks. Thus a recent run-up in the price of a share per se offers no reason to believe that above-normal returns will accrue to those who buy in today. Yet the existence of a substantial amount of momentum investing is well known anecdotally and has been documented by several scholars.9
Detecting herding empirically is a daunting task for several reasons. First, there may be good “fundamental” reasons for everyone to rush for the exits—or for the entrances—at once, without having contracted the urge just by watching others. Think about Enron stock once the bad news became public in 2001. Or Argentina's slow-motion default in 2000–01. Or, on the upside, the reaction of a pharmaceutical company's stock to its announcement of a new blockbuster drug. Investors are not necessarily acting like a herd just because many of them do the same thing at the same time. Second, devising a measure of herding is no simple task.10 Third, it may be next to impossible to distinguish between rational and irrational herding. On the other hand, this last distinction may not be too important for present purposes since either type of herding behavior can lead to overreaction in markets—the phenomenon that concerns us here.
Although I have dwelt on it, herding behavior is just one of (p.71) several possible explanations for systematic overreaction in speculative markets. Another, related explanation is that financial market participants frequently succumb to fads and fancies, producing speculative bubbles that may diverge sharply from fundamentals.11 If learning that buying shares of stupid-idea.com is the in thing to do makes you too long for shares in the company, then the market may display positive feedback loops that produce more volatility than the fundamentals can justify. But central bankers must steadfastly resist such whimsy and inoculate themselves against the faddish behavior that so often dominates markets. That may be why central bankers are not much fun at parties.
Last, but certainly not least, there is the nasty matter of time horizons. Homo economicus has a long (perhaps infinite) time horizon and a reasonable discount rate. So, I hope, do most central bankers. But traders in real-world financial markets seem to have neither. One might hope that Darwinian mechanisms would select for patient, long-term investors—but they do not appear to do so in markets that are dominated by daily mark-to-market, quarterly reporting, and compensation based on short-term performance.
Here is a stunning quantitative example that I have cited before.12 According to the standard theory of the term structure of interest rates, about which I will have critical things to say shortly, the thirty-year bond rate should be the appropriate average of the one-year bill rates that are expected to prevail over each of the next thirty years. Only one of these, today's one-year rate, is currently observed in the market. But the others—the so-called implied forward rates—can be inferred from the term structure. I will explain this in more detail shortly. But, for now, a simple example will do.
Suppose today's observed one-year interest rate is 3 percent and the two-year rate is 4 percent. Together, these imply that the one-year rate expected to prevail one year from now must be about 5 percent. The reason is straightforward. Investing in the two-year bond will leave you with (1.04)² two years later, after compounding. On the other hand, investing in two consecutive one-year bonds will be expected to earn you (1.03)(1+r), where r is the one-year rate expected to prevail a year from today. Since arbitrage dictates that the two returns must be more or less equal, r must be approximately 5 percent.13 Proceeding similarly, one can use the term structure to deduce all the implied forward rates, as I will demonstrate shortly. But now back to the time horizon issue.
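The compounding arithmetic in this example is easy to verify. The short Python sketch below uses the rates from the example and computes the exact implied forward rate, which comes out just above 5 percent, as the approximation suggests:

```python
# Implied one-year forward rate, one year ahead, from the 3%/4% example.
# Exact no-arbitrage condition: (1 + r2)^2 = (1 + r1) * (1 + f),
# so f = (1 + r2)^2 / (1 + r1) - 1.
r1 = 0.03  # observed one-year rate
r2 = 0.04  # observed two-year rate

forward = (1 + r2) ** 2 / (1 + r1) - 1
print(f"implied forward rate: {forward:.4%}")  # a bit above 5 percent
```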
One day in 1995, while serving as vice chairman of the Federal Reserve Board, I was wondering about bond market overreaction—which I thought I was witnessing all around me. So I asked the Fed staff to do a calculation. What, I inquired, is the correlation between daily changes in the one-year interest rate on U.S. Treasuries and daily changes in the implied one-year forward rate expected to prevail twenty-nine years from now? I was pretty sure I knew the theoretically correct answer to this question: essentially zero, because hardly any of the news that moves interest rates on a daily basis carries significant implications for interest rates twenty-nine years in the future. Modern-day Ptolemaists will, of course, insist that I am wrong about this. They will argue that things often happen that should have similar effects on both current interest rates and rates twenty-nine years from now. Oh? Name two.14
In any case, the statistical answer for the year 1994 was +0.54.15 Taken literally, this correlation means that you can explain 29 percent of the variance of changes in the one-year interest rate 10,585 days from now by using nothing but today's change. Don't bet Yale's endowment, or even your own, on that!
So why is the correlation so high? My hunch, which I will develop in greater detail shortly, is that it reflects overreaction stemming from excessively short time horizons. The men and women who traded the thirty-year bond in 1994 (like the folks who do it today) were probably not thinking about the implications of various bits of news for the economy in the year 2024. Indeed, many of them probably had a hard time getting their arms around the concept of thirty years, having not yet attained that august age themselves. Instead, I believe, traders were buying and selling the long bond as if it were a much shorter instrument.
If this hunch—and I admit that it is no more than that—is correct, there is a supreme irony here. One of the chief arguments for making central banks politically independent is that monetary policy requires a long time horizon, not the notoriously short time horizons of elected politicians. But if the central bank follows the market too slavishly, it will tacitly and inadvertently adopt the market's short time horizon as its own. Politicians may focus on the next election, which is bad enough. But bond traders may focus on their positions at the end of the trading day, or perhaps a half-hour from now, which is much worse. A politically independent central bank that follows the whims of the markets may thus wind up with an effective time horizon even shorter than that of a politician.
It is also very likely to overreact, just as the markets frequently do. Here is a simple example. Suppose something happens that should, on rational grounds, induce the central bank to raise interest rates, but only very slightly—say, by 25 basis points. Perhaps the government issues a bad-looking inflation report for a single month, or something like that. The market sees this new information but exaggerates its significance. It therefore begins to embody expectations of, say, a 75-basis-point rate hike into asset prices. The central bank reads the market's expectations from the term structure and feels compelled to deliver something closer to what the market expects, say, 50 basis points, rather than “disappoint the markets.” In this instance, the central bank's reaction is twice as large as it should be. While this is an exaggerated example, it serves to make the general point: If markets overreact and central banks follow the markets, then central banks are likely to overreact, too.
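The arithmetic of this example can be laid out in a few lines. In the sketch below, the "split the difference" rule for the central bank is purely my illustrative assumption (the text says only that the bank delivers something closer to what the market expects); it is used to show how following the market doubles the warranted move:

```python
# A sketch of the overreaction feedback in the text's example. The
# midpoint ("split the difference") policy rule is an illustrative
# assumption, not a model from the book.
warranted = 25        # basis points the news actually justifies
market_expected = 75  # what an overreacting market prices in

# Suppose the central bank delivers the midpoint between what is
# warranted and what the market expects, to avoid "disappointing" it.
delivered = (warranted + market_expected) / 2
print(delivered)               # 50 basis points
print(delivered / warranted)   # 2.0: twice the warranted move
```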
More analytically, my Princeton colleagues Ben Bernanke (who is now a Fed governor) and Michael Woodford (1997) have built a rational expectations model in which a central bank can create a multiplicity of equilibria by reacting to the market's forecast of inflation rather than to its own. They emphasize the importance of the central bank using its own inflation forecast rather than relying on the market's, which, in the context of their model, is the only way the bank can maintain its independence from the market.
# A Case in Point: The Term Structure of Interest Rates
I just used the term structure of interest rates to illustrate the general phenomenon of overreaction. This was no idle choice. In fact, one does not get very far in discussing central banking practice without mentioning the term structure. The so-called expectations theory of the term structure, to which I alluded earlier, is the vehicle almost always used to assess what the markets expect the central bank to do. It is therefore critical to communication between the central bank and the markets—in both directions. The question I want to deal with next is whether the markets are communicating well-considered wisdom to the central bankers or something rather less valuable. If the latter, of course, the central bank should listen rather selectively.
The role of the term structure is also central to the transmission mechanism of monetary policy, and for a very simple reason. Monetary policymakers generally have direct control over only the overnight interest rate. In the United States, that is the federal funds rate, the interest rate at which some banks lend reserves to others. At any one time, only a small minority of banks is active in this market. More fundamentally, as I noted earlier, no economic transaction of any importance takes place at the federal funds rate. If the Fed's monetary policy is to succeed in influencing the interest rates and asset prices that really matter—such as loan rates, bond rates, exchange rates, and stock market valuations—then changes in the funds rate must somehow be transmitted to these other financial variables. The expectations theory of the term structure provides the standard linkage.
The theory itself starts with a simple arbitrage argument. As in the numerical example above, an investor can buy a two-period bond and hold it to maturity or buy a one-period bond and roll it over into another one-period bond when the first one matures. If each strategy has adherents, the expected returns on the two strategies must be equal. That means that, roughly,16 the two-period interest rate must be equal to the average of the two one-period rates—the first of which is actually observed in the marketplace today and the second of which is an (implicit) expectation. Using obvious symbols,17
(3.1)

$r_{2,t} = \tfrac{1}{2}\left(r_{1,t} + r^{e}_{1,t+1}\right)$
where $r_{1,t}$ and $r_{2,t}$ are, respectively, the one- and two-period interest rates prevailing at time t, and the superscript e indicates an expectation. In this case, $r^{e}_{1,t+1}$ is the one-period rate expected to prevail one period from now. In the simple numerical example, the two-year rate was 4 percent, the one-year rate was 3 percent, and we deduced that the one-year rate expected to prevail one year from now had to be 5 percent because 4% = ½(3% + 5%). Note that this relationship should hold whether time is measured in days, weeks, months, or years.
Similar relationships hold for three-period interest rates, four-period interest rates, and so on. Thus $r_{1,t}$, $r_{2,t}$, $r_{3,t}$, and so on—the constituent parts of what is called “the yield curve”18—depend crucially on expectations. If you think of time as measured in days, it is clear that the entire term structure should, in principle, be driven by expectations of what future monetary policy will be. For example, the one-year interest rate should embody today's one-day rate and the next 364 expected one-day rates; the ten-year interest rate should embody the next 3,649 expected one-day rates (forgetting about leap years); and so on.
As we move out along the yield curve to longer maturities, a term premium—variously rationalized as a risk premium or a liquidity premium—is generally added to the right-hand sides of equations like (3.1) to represent what investors demand to be paid to compensate them for the higher risk or lower liquidity of longer-dated instruments. In practice, the use of these premiums sometimes borders on the tautological: If the two sides of an equation like (3.1) appear to move differently, you can always square the circle with the appropriate time-varying risk premium. But the main point for present purposes is that interest rates on medium- and long-term debt instruments should depend mainly on expectations of future central bank policy.
This little detour into term structure theory explains why expectations are so central to the monetary policy transmission mechanism. A Federal Reserve action that strongly affects expectations of future short-term interest rates will, according to the theory, have a much greater impact on long-term interest rates than an action that moves today's short rate but leaves expected future short rates largely unaffected.
Equations like (3.1) can be used to deduce the expected future short rates (called implied forward rates) that I mentioned before. For example, it follows immediately from equation (3.1) that
(3.2)

$r^{e}_{1,t+1} = 2r_{2,t} - r_{1,t}$
The logic behind (3.2) is straightforward. If you earn the two-year rate for two years, you get (approximately) $2r_{2,t}$. If you earn the one-year rate for one year, you get $r_{1,t}$. Equation (3.2) then answers the question: How much must investors be expecting to earn in the second year?
More complicated versions of (3.2) will produce any implied forward rate you want. For example, I will shortly look at the nine-year-ahead forecasts of the one-year interest rate, in which case:
(3.3)

$r^{e}_{1,t+9} = 10r_{10,t} - 9r_{9,t}$
In words, if a prospective investor can earn $r_{10,t}$ annually for ten years with one strategy and $r_{9,t}$ annually for nine years with the other strategy, then she must be expecting to earn $10r_{10,t} - 9r_{9,t}$ in the remaining year. Since everything on the right-hand sides of equations like (3.2) and (3.3) can be observed directly in the market, the implied forward rates are easily computed, and financial specialists do so routinely.19
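As a sketch of the routine computation just described, the following Python fragment extracts implied one-year forward rates from a zero-coupon yield curve, using the simple-average approximation of equations (3.2) and (3.3). The yield curve itself is hypothetical, invented purely for illustration:

```python
# Implied one-year forward rates from a (hypothetical) zero-coupon yield
# curve, using the simple-average approximation of equations (3.2)-(3.3):
# the one-year rate expected k years ahead is (k+1)*r_{k+1} - k*r_k.
yields = {1: 0.030, 2: 0.040, 9: 0.052, 10: 0.053}  # maturity (years) -> yield

def implied_forward(yields, k):
    """One-year rate expected to prevail k years from now."""
    return (k + 1) * yields[k + 1] - k * yields[k]

print(round(implied_forward(yields, 1), 6))  # 0.05, as in the 3%/4% example
print(round(implied_forward(yields, 9), 6))  # 0.062, via equation (3.3)
```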
So far, so good. But here's the rub. The implied interest rate forecasts (expectations) that can be deduced from the yield curve bear little resemblance to what future interest rates actually turn out to be. There is no space here—and probably even less patience among readers—for a thorough review of the empirical evidence on the term structure of interest rates.20 Suffice it to say that the abject empirical failure of the expectations theory of the term structure of interest rates is a well-established fact.
I will offer just two kinds of simple evidence here. The first is for ordinary people—simple pictures. The second is designed for professionals—simple regressions. Both derive from the same sort of equation—versions of (3.2) and (3.3).
Look first at figure 3.1, which offers a kind of eyeball test of equation (3.3). The right-hand side of (3.3), which is observable every day in the market, is a forecast of the one-year rate expected to prevail nine years from today. We can assess the accuracy of such forecasts historically by comparing them to the one-year interest rates that actually obtained nine years later, provided we are willing to wait nine years. Figure 3.1 does precisely this, using monthly data on the yields on zero-coupon bonds over the period December 1949 to February 1991.21

Figure 3.1 Predicted vs. Actual One-Year Interest Rates
On the horizontal axis, I plot the forecasted one-year bond rate nine years (that is, 108 months) later, computed from equation (3.3) for each month in the sample period. On the vertical axis, I plot the actual one-year rate 108 months later. In principle, the two should be equal, except for random forecasting errors. The straight line shown in the graph is not the best-fitting regression line, but rather a line with a slope of one—indicating the theoretically correct relationship.22 You do not need advanced training in statistics to see that the forecasts are pretty terrible. In fact, there appears to be little relationship between the two variables.
For those who do have such training, I offer two regressions. The first regresses the actual interest rate, $r_{1,t+9}$, on the forecast, $r^{e}_{1,t+9}$, as defined by equation (3.3), and tests the restriction that the slope coefficient is 1.0. Unsurprisingly, given the picture, the null hypothesis is easily rejected. In fact, the resulting point estimate of the slope is just 0.27 (with standard error 0.037).23 The second comes from the existing literature—in particular, from a paper by John Campbell (1995, 139). It uses a variant of the term structure logic in which this month's yields on one-month and twelve-month zero-coupon bonds are used to forecast how the one-month yield should change over the ensuing eleven months.24 Once again, his equation has the feature that the estimated slope parameter would be 1.0 under the expectations theory. But his estimate is only 0.25 with standard error 0.21. That point estimate, which is quite close to mine, is significantly different from the theoretically correct value of 1.0 but not significantly different from zero.
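For readers who want to see the mechanics of such a slope test, here is a sketch on synthetic data. The data-generating process, with a true slope of 0.3 (chosen to be in the neighborhood of the estimates above), is invented purely for illustration; it is not the bond data used in the regressions just described:

```python
import math
import random

# Synthetic illustration (not the actual bond data): draw "forecasts",
# build "realizations" whose true slope is 0.3, then test the
# expectations-theory restriction that the slope equals 1.0.
random.seed(0)
n = 500
forecast = [random.gauss(0.06, 0.02) for _ in range(n)]
actual = [0.03 + 0.3 * f + random.gauss(0, 0.01) for f in forecast]

# Ordinary least squares by hand.
fbar = sum(forecast) / n
abar = sum(actual) / n
sxx = sum((f - fbar) ** 2 for f in forecast)
sxy = sum((f - fbar) * (a - abar) for f, a in zip(forecast, actual))

slope = sxy / sxx
intercept = abar - slope * fbar
resid = [a - intercept - slope * f for f, a in zip(forecast, actual)]
sigma2 = sum(e * e for e in resid) / (n - 2)
se_slope = math.sqrt(sigma2 / sxx)

t_stat = (slope - 1.0) / se_slope  # test of H0: slope = 1
print(round(slope, 3), round(se_slope, 4), round(t_stat, 1))
```

With the slope estimate near 0.3 and a small standard error, the t-statistic against 1.0 is hugely negative, which is the shape of the rejection reported in the text.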
In brief, what we seem to have found is that the expectations theory of the term structure performs miserably over moderate (one-year) and long (ten-year) time horizons.25 The empirical failure of the expectations theory of the term structure raises an obviously interesting question: Why? Why does a theory that seems so obviously correct in principle work so poorly in practice?
There is another, equally fascinating question, however. The theory's abject failure is not some deep, dark secret that we professors know about but have somehow kept from the rest of the world. Central bankers realize that the expectations theory does not work. So do market participants, who nonetheless appear to use it to guide billion-dollar interest rate bets each day. Yet, in what appears to be a stunning example of pretending that the emperor is still fully dressed, academic economists, central bankers, and market participants alike all proceed as if the expectations theory really underpins the term structure. It's a curious case of mutually agreed self-delusion, and the question is how and why it persists.
Freed, as I am in this book, from the heavy burden of peer review, I would like to suggest tentative answers to each of these questions.
First, why do experts continue to use the expectations theory of the term structure despite overwhelming evidence against it? My answer is that doing so is an act of desperation—they have no alternative. On a priori grounds, it is hard to understand how the expectations theory could be wrong. If expectations of future short rates do not determine long rates, then what does? I must admit that I have a hard time answering that question myself, and so I frequently find myself using the expectations theory to interpret the yield curve anyway. It's a bad habit that is hard to kick. Rarely has the old saw “it takes a theory to beat a theory” been leaned on so hard.
Second, why does the theory fail so miserably in explaining the facts? That may be the harder question. It is also the one most relevant to thinking through what the central bank can (or cannot) learn from the yield curve. I want to offer two candidate answers, while leaving the ultimate resolution of the issue, as usual, to the proverbial subsequent research. The two answers are consistent with one another. Each denies that expectations are rational. And each explains why the implied forward rates—the expectations—are overly sensitive to current rates, and therefore why long rates overreact to short rates.
My first candidate answer, which I offer with some trepidation and only because both New Haven and Princeton are so far from the Great Lakes, was suggested earlier; but let me repeat it now. I believe that, when it comes to pricing long-term bonds, market participants do not peer as far into the future as the theory says they should. Instead, they are systematically myopic and extrapolative, treating and trading longer-term instruments as if they were much shorter-dated instruments. One consequence is that the current situation and the latest news get far too much weight in setting today's long-term interest rates.
If this is so, then the amazingly high correlation between the one-year interest rate and the implied forward rate twenty-nine years from now becomes understandable. If traders treat the thirty-year bond as if it were, say, a three-year bond, then it is not hard to see why its price should respond strongly to short-term influences. Generalizing this example, we see that artificially short time horizons offer a straightforward explanation of Shiller's (1979) evidence for the overreaction of long rates to short rates.
The second explanation dispenses with rational expectations in a different way. A long-neglected paper by my Princeton colleague Gregory Chow, published in 1989, starts with the usual finding: The data he studies resoundingly reject the joint hypothesis that the expectations theory of the term structure holds and that expectations are (statistically) rational.26 Furthermore, the estimated parameters make no sense. Chow (1989) then inquires into which is the weak sister. His answer is clear. When he replaces the assumption of rational expectations with adaptive expectations, he finds that the estimated parameters in the term structure equation are reasonable and that the joint hypothesis is not rejected. In other words, the expectations theory fails under rational expectations but works just fine under adaptive expectations.
Interesting. But how does that relate to the short-time-horizons idea? Simple. It turns out empirically that, compared to rational expectations, adaptive expectations place much greater weight on current short rates. In Chow's estimated example, under rational expectations a sustained 100-basis-point increase in the one-month rate has no effect on the twenty-year rate in the same month, only an 11-basis-point effect after three months, and only a 21-basis-point effect after six months.27 But under adaptive expectations, the contemporaneous reaction is 20 basis points, the three-month reaction is 33 basis points, and the six-month reaction is 45 basis points.28
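The flavor of the adaptive-expectations mechanism can be captured in a small simulation. The adjustment speed below is an illustrative assumption, not Chow's estimate, and the long rate is crudely proxied by the adaptive expectation itself; the point is only the qualitative pattern of gradual pass-through from the current short rate:

```python
# A toy adaptive-expectations sketch (illustrative parameters, not
# Chow's estimates). The expected future short rate is updated by
#   e_t = e_{t-1} + lam * (r_t - e_{t-1}),
# and the long rate is treated, crudely, as that expectation.
lam = 0.2      # monthly adjustment speed (assumed)
short = 0.04   # the short rate jumps from 3% to 4% and stays there
e = 0.03       # the expectation starts at the old short rate

path = []
for month in range(7):
    path.append(e)
    e = e + lam * (short - e)

# Pass-through of the 100-basis-point short-rate rise into the expectation:
for m in (0, 3, 6):
    print(m, round((path[m] - 0.03) * 10000, 1))  # basis points
```

Under rational expectations, a rise known to be permanent would be incorporated at once; under this adaptive rule, the expectation drifts up only gradually, always anchored heavily on where the short rate has recently been.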
In sum, relative to rational expectations, both adaptive expectations and, I would suggest, actual human behavior put far too much weight on current market conditions. This finding will not surprise anyone who has not been unduly influenced by advanced training in economics. And if it is true, delivering the monetary policy that is expected, if not indeed demanded, by the (myopic) markets could lead a too-compliant central bank down a primrose path. So this is one case—and an important case at that—in which it is important that the central bank not take its lead from the markets.29
# Another Case in Point: Uncovered Interest Rate Parity
An analogous problem with interpreting market signals—and imbuing them with too much wisdom—arises in an international context. Instead of thinking about the arbitrage-like relations that arise in choosing among instruments of different maturities, as we do in the term structure, now think about choosing among instruments denominated in different currencies (over the same maturity). To start once again with a simple concrete example, suppose one-year U.S. Treasury bills are paying 4 percent in dollars at a time when equally safe one-year German government bills are paying 3 percent in euros. The theory of uncovered interest rate parity is based on the following simple but compelling insight: If some investors choose the U.S. paper while others choose the German, then the two must have (approximately) equal expected yields—whether you measure that yield in dollars or in euros. For that to be the case, the euro must be expected to appreciate by 1 percent over the year relative to the dollar.
Let's be more precise. If you invest $100 in the U.S. paper, you will get back $104 at the end of the year, with certainty. Alternatively, you can (a) purchase 100 euros (using an exchange rate of $1 = 1 euro, and ignoring commissions), (b) invest that money at 3 percent to get back 103 euros after a year, and then (c) convert those euros into dollars at whatever exchange rate, X, then prevails. Doing all this will earn you 103/X dollars. A simple arbitrage-like argument says that, with risk-neutral investors, these two investment strategies must offer the same expected return—which is the basic insight underlying uncovered interest parity. In the specific example, $103/X^{e}$ must be approximately equal to $104, so that $X^{e}$ must be 0.9904.30 Thus, for the 4 percent U.S. interest rate and the 3 percent German interest rate to coexist in financial market equilibrium, the euro must be expected to appreciate from $1 per euro to $1/0.9904 = $1.0097 per euro, or by approximately 1 percent.

Generalizing this simple example, uncovered interest rate parity states that, for two equally risky instruments denominated in different currencies but covering the same time period (any time period will do):

(3.4)

$r_{d} = r_{f} + x^{e}$

where $r_d$ is the domestic-currency interest rate, $r_f$ is the foreign-currency interest rate, and $x^e$ is the expected rate of appreciation of the foreign currency. ($x^e$ is negative if the foreign currency is expected to depreciate.)

Equations like (3.4) tie interest rates and exchange rate expectations tightly together. The age-old question is, Which moves which? To see why this question is relevant to monetary policy, let's consider an application that is near and dear to the hearts of central bankers. Think of $r_d$ and $r_f$ as very short-term interest rates, more or less controlled by the Fed and the ECB, respectively. Now suppose the Fed raises $r_d$, but the ECB does not change $r_f$.
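The dollars-and-euros arithmetic above can be checked directly. The sketch below reproduces the $104-versus-103/X comparison and the implied 1 percent expected appreciation of the euro:

```python
# Check the uncovered-interest-parity example from the text: the two
# strategies must have (approximately) equal expected dollar payoffs.
usd_rate, eur_rate = 0.04, 0.03

usd_payoff = 100 * (1 + usd_rate)  # invest $100 at home: $104
eur_payoff = 100 * (1 + eur_rate)  # convert at $1 = 1 euro, invest: 103 euros

# X is the exchange rate in euros per dollar a year from now; the euro
# route pays eur_payoff / X dollars, so equal expected returns pin down:
x_e = eur_payoff / usd_payoff
print(round(x_e, 4))          # 0.9904 euros per dollar, as in the text
print(round(1 / x_e, 4))      # 1.0097 dollars per euro
print(round(1 / x_e - 1, 4))  # roughly 1% expected euro appreciation
```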
By the logic underlying (3.4), the expected change in the exchange rate must adjust upward. Specifically, the euro must now be expected to appreciate more or depreciate less than was the case just prior to the Fed's move. If the exchange rate expected to prevail a year from now is not changed by this event, as theoretical models generally assume, that means the dollar must rise now in order to create the expectation that it will fall later. On the other side of the Atlantic, of course, the ECB may be less than thrilled by the immediate depreciation of the euro. But, according to the logic behind equation (3.4), there is not much it can do about it—short of raising its domestic interest rate to match the Fed's. Like it or not, uncovered interest parity puts central banks in bed with one another. This is one-worldism in the extreme.

Now to the bad news. Think of equation (3.4) as a way to forecast changes in the exchange rate. It says, for example, that the expected rate of appreciation of the euro must be equal to the interest rate differential.31 Thus, by examining interest rates here and abroad, you can read an implicit forecast of where exchange rates are expected to go from market prices—just as interest rates on short- and long-term debt imply a forecast of where short rates are expected to go. Thus, equation (3.4) is a market-based forecasting equation, similar in spirit to equation (3.2). Having read that, you can probably guess the punch line. Yes, the exchange rate forecasts implied by uncovered interest rate parity are truly terrible.32 Don't bet a nickel on them.

Once again, there is a substantial scholarly literature on this point. And, once again, I'll gloss over this literature and offer just two pieces of supporting information. The first is a pair of simple graphs comparing the forecasts of exchange rate changes based on interest rate differentials to the exchange rate changes that actually occurred.
Figures 3.2 and 3.3 both use data on the dollar/yen exchange rate and interest rate data on government debt in Japan and the United States. In figure 3.2, I compare the actual three-month change in the exchange rate to the forecast implied by three-month Treasury bill rates in the two countries three months earlier. Not only is the relationship not tight, it is actually perverse: Amazingly, the yen typically depreciated when its short-term interest rate was below the U.S. rate. Figure 3.3 compares ten-year exchange rate forecasts derived from equation (3.4) with actual realizations. There appears to be virtually no relationship between the two.

Figure 3.2 Predicted vs. Actual Exchange Rate Changes over Three Months

Figure 3.3 Predicted vs. Actual Exchange Rate Changes over Ten Years

For readers who are regression-minded, notice that (3.4) suggests the following linear regression:

(3.5)

$x_{t} = a + b\,(r_{d,t} - r_{f,t}) + \varepsilon_{t}$

where $x_t$ is the actual appreciation of the foreign currency over the period. A regression fitted to the data underlying figure 3.2 has a slope coefficient of minus 3.3 (with a Newey-West standard error of 0.79), whereas the theoretically correct coefficient is 1.0. Surprisingly, the best-fitting regression for figure 3.3 has a slope of 1.04 (standard error 0.22)—so, of course, we cannot reject b = 1. But a glance at figure 3.3 reminds us that the interest rate differential has a terrible forecasting record.

This finding is not special to the dollar/yen exchange rate or to the time horizons I have selected. Sushil Wadhwani (1999), while a member of the Bank of England's Monetary Policy Committee, called attention to the failure of uncovered interest parity by running regressions similar to (3.5) for several different exchange rates over a one-year horizon. Not only are his estimates of b not equal to +1.0, they are actually all negative.33 Never mind accuracy. This means that interest rate differentials actually would have pointed you in the wrong direction, just as indicated by figure 3.2.
So once again we have a major intellectual puzzle: A theory with seemingly impeccable logical credentials fails miserably in empirical tests. Interest rate differentials turn out to be horrible forecasters of changes in exchange rates. In fact, random walk models—which simply assume that exchange rates will never change—make better forecasts. Economists have been aware of this annoying fact ever since Meese and Rogoff's (1983) important paper. But they have yet to explain it.

So let me speculate about reasons. Once again, short time horizons and extrapolative behavior by traders probably play major roles, in stark contrast to the assumptions of rational expectations models. For example, more than twenty years ago, studies of the profitability of trading on exchange rate forecasts from three different sources—chartists, services based on “fundamentals,” and forward exchange rates—found that the chartists' forecasts did best and the fundamentals-based services did worst.34 I must say that this corresponds to my own casual observations of markets—they appear to be extrapolative in the short run. Note that since chartists base their predictions solely on recent price movements, any evidence of profitable trading based on chartist analysis represents a clear refutation of market efficiency—just as momentum trading does for stocks.

It is, furthermore, fascinating to note that the failure of uncovered interest parity is much more spectacular at short time horizons—where trading based on extrapolative expectations may dominate—than it is at long time horizons. This finding is consistent with our two graphs, which showed much worse performance of uncovered interest parity over three months than over ten years. It is also consistent with the scholarly literature.35

From a central banker's point of view, the routine violation of uncovered interest parity at short horizons creates a serious problem.
Suppose you are the Fed, thinking about cutting interest rates ($r_d$ in equation [3.4]) and wondering what impact this action will have on the economy. One of the standard channels of monetary transmission is, of course, through the exchange rate. So one of the things you are wondering about is how your action will affect the value of the dollar. If you believe that the other major central banks will not match your rate cut, equation (3.4) says that reducing the federal funds rate will make the dollar fall first in order to create expectations of a subsequent appreciation. A cheaper dollar should help U.S. exports and discourage U.S. imports, thereby boosting aggregate demand and giving your monetary policy an assist. But as a well-educated central banker, you are also aware that exchange rate forecasts generated by uncovered interest parity are terribly inaccurate. So you have little confidence that any of this will actually happen. How, then, do you reckon the exchange rate channel into your calculations?

In practice, this intellectual puzzle seems to have deepened in recent years. The new conventional wisdom is that boosting the economy by cutting interest rates may actually make the currency appreciate immediately, presumably because a stronger macroeconomy improves prospects for (mostly financial) investment and attracts larger capital inflows. Notice that this new “model” of how the exchange rate reacts to monetary policy is precisely the opposite of what the textbooks teach.36 I was brought up to believe that easier money made your currency depreciate. (Of course, in those days we also walked barefoot over glass to get to school—uphill in both directions.) Nowadays, what I used to deride as the “macho theory of exchange rates”—the idea that the exchange rate reflects the nation's virility—is looking better and better. I hasten to say that the macho theory is now the market's belief only for major countries like the United States, Europe, and Japan.
No one to my knowledge has suggested it for Argentina or Turkey. Yet recent evidence questions even the standard view that emerging-market countries can defend their exchange rates by jacking up interest rates. They may just kill their economies instead.37

# The Special Case of Foreign Exchange Intervention

Up to now, I have been speaking about central bank operations that are designed to change interest rates—which, of course, is what we generally mean by monetary policy. Exchange rate movements were only a by-product, although perhaps an important one. But sometimes a central bank—either on its own authority or under orders from the Treasury or Finance Ministry—intervenes in the foreign exchange market with the expressed intent of changing the (otherwise floating) exchange rate without changing the domestic interest rate.38 Economists call such operations sterilized interventions because they insulate domestic monetary conditions from exchange rate intervention.39 The question is, Can sterilized foreign exchange interventions work?

To someone who has not studied economics beyond the 101 level, the answer may seem self-evident. If the Fed enters the market to sell dollars, the dollar falls. If it buys dollars, the dollar rises. Right? Well, maybe not. Remember equation (3.4) again. In a sterilized intervention, $r_d$ does not change—under the assumption that domestic and foreign assets are perfect substitutes. Presumably, $r_f$ doesn't either. In that case, $x^e$ should not change, and so neither should the exchange rate.40

To readers unfamiliar with such arguments, this may seem a bit like pulling a rabbit out of a hat. So here is an analogy for noneconomists. Suppose Coke and Pepsi are perfect substitutes in the eyes of consumers. That means that, at equal prices, they do not care which one they drink—but if the price of either soft drink rises by even a penny, customers will move en masse to the other drink.
Now suppose the government tries to drive up the price of Pepsi (and drive down the price of Coke) by offering to buy Pepsi and sell Coke. Can this intervention possibly succeed? Not under the hypothesis of perfect substitutability, for then even the slightest price advantage for Coke will give virtually the entire market to Coca-Cola. So the two prices cannot differ, except fleetingly. If the government sells Coke to buy Pepsi, private parties will do just the reverse. The prices must remain equal. Perfect substitutability works in the same way to negate the effects of a sterilized foreign currency intervention. The central bank, for example, buys domestic Treasury bills and sells foreign Treasury bills of equal market value. In principle, that should lower the domestic interest rate and raise the foreign interest rate. But if foreign and domestic bills are perfect substitutes in the eyes of investors, private investors will willingly sell as much of the domestic issue as the government offers to buy and buy as much of the foreign issue as the government offers to sell—all with virtually no change in any price. Neither rd nor rf nor xe need move. So the issue comes down to one of substitutability in the portfolios of private asset holders.41 If, say, U.S. and Japanese government debt instruments are perfect substitutes, then sterilized intervention cannot move the dollar/yen exchange rate at all. If they are very strong but not quite perfect substitutes, it cannot move the exchange rate much. But if the two types of bonds are rather imperfect substitutes, there is real scope for exchange rate intervention to work. Which theoretical case is most relevant to the real world? In truth, most economists are skeptical that sterilized interventions either should work in principle or do work in practice. Some (not (p.92) including me) elevate this belief to a quasi-religious status. Substitutability is almost perfect, they insist. 
And a central bank's foreign exchange portfolio is so small relative to even the daily volume of foreign exchange transactions that it can do little more than spit in the ocean. Take these two arguments in turn, starting with perfect substitutability. I do not believe that, say, Bill Gates is indifferent to whether his portfolio includes $10 billion of U.S. Treasury bills or $10 billion worth of Japanese government bills. Even if the expected returns are identical (as uncovered interest parity suggests), the risk characteristics are quite different. But the point about magnitudes is even more apposite, I think, and it may go a long way toward explaining why official interventions seem to have such small effects. Rarely do central banks intervene on the scale that would be necessary to move what have become truly enormous markets in the major currencies: the dollar, the yen, and the euro. With typical interventions of $1 or $2 billion in a foreign exchange market that routinely handles over a trillion dollars each day, it is hardly surprising that central bank operations do not move markets very much. In what market does a 0.1 percent change in supply move the price notably? In fact, the amazing thing may be that such small interventions move markets at all. A decade or so ago, the weight of the academic evidence clearly held that they did not—that sterilized intervention was ineffective.42 But studies of the 1970s and 1980s may have been hampered by the lack of detailed data on the exact timing and magnitude of interventions. More recent studies, published in the 1990s, find more evidence that intervention works, at least somewhat.43 To be sure, it would be foolish for any modern central bank in a country with a floating exchange rate to believe that it has tight control over its currency value—whether via sterilized intervention or otherwise.
Market folk wisdom holds, variously, that you (p.93) shouldn't stand in front of a speeding freight train or try to catch a falling knife. (But it also holds, Don't fight the Fed. Oh, well. Who ever said that markets were consistent?) No sensible person believes that small-scale foreign exchange interventions can reverse the direction of a big market that is hell-bent on moving in a particular direction. That is, indeed, tantamount to standing in front of a speeding freight train. But market participants do not always hold strong convictions about which way the exchange rate should go. After a big run-up, for example, traders may become nervous that the dollar (or the yen or the euro) is overbought and is therefore due for a “correction.” Under such circumstances, a loud, clear intervention by the authorities, especially if concerted and sprung as a surprise, may succeed in pushing the market around without committing terribly much money. On those rare occasions when markets are not united in their view of the direction in which the exchange rate should be going, but governments are (and they show it), official intervention may be able to influence exchange rates substantially.44 The Plaza Accord in 1985 may have been one such example. Robert Rubin's successful turnaround of the dollar in 1995 may have been another. My second obvious point is that not all markets are very deep. A $1-billion intervention in dollar/yen may, under most circumstances, be a futile gesture. But if $1 billion is applied to the cross-rate between, say, the British pound and the Czech crown, it may look like the Czech authorities have brought out the heavy artillery. I am concerned that too many governments of (economically) small countries may be afraid of adopting a floating exchange rate in part because they think that any subsequent intervention efforts are doomed to failure. That may not be so. No one will expect them to be able to move the dollar/yen exchange rate.
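The order-of-magnitude point here is simple arithmetic and can be made concrete. A toy sketch (the $1-billion intervention and roughly $1-trillion-a-day turnover come from the text; the thin cross-market turnover of $5 billion a day is an invented illustration):

```python
def intervention_share(intervention_usd, daily_turnover_usd):
    # What fraction of one day's market turnover does the operation represent?
    return intervention_usd / daily_turnover_usd

major = intervention_share(1e9, 1e12)  # $1B in a ~$1T/day major-currency market
small = intervention_share(1e9, 5e9)   # the same $1B in a hypothetical thin cross-market
print(f"{major:.1%} of daily turnover vs. {small:.0%}")  # → 0.1% of daily turnover vs. 20%
```

The 0.1 percent figure in the text is exactly this ratio; the same operation aimed at a market one two-hundredth the size looks like heavy artillery.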
To return to our main theme, a modern central bank might well question how much wisdom is embodied in foreign exchange (p.94) markets that can't even get uncovered interest parity right as an ex ante relationship—that is, as a way to forecast exchange rate movements. Nor need the central banks of the world feel tightly constrained by uncovered interest parity in an ex post sense. Puzzling as it may be, exchange rates and interest rates seem to lead separate lives.

# Summing Up

I can review, encapsulate, and perhaps integrate the main messages of this chapter by pointing out the shortcomings of a straw man. Imagine a central bank so enraptured by modern thinking that it dutifully follows the signals emitted by the financial markets. This hypothetical (and, you might say, wimpy!) central bank would read what the markets expect it to do from the term structure of interest rates, from prices observed in the federal funds futures market and, perhaps, from the foreign exchange markets. Then it would deliver precisely that policy. Monetary policy decisions would effectively be privatized. What's wrong with such a system? Many things. To start, in a strictly rational expectations framework, following the markets in this way can lead to a kind of “dog chasing its tail” phenomenon that may not have a well-defined equilibrium. At the very least, it is likely to produce excessively volatile monetary policy and therefore excessively volatile markets. This is perhaps the most fundamental criticism of the strategy of following the markets. Now allow a few not-so-rational, but probably quite realistic, elements to creep into the story. A central bank that tries too hard to please currency and bond traders may wind up adopting the market's ludicrously short effective time horizons as its own—thereby succumbing to the very danger that central bank independence was supposed to guard against.
(p.95) And then there are those allegedly invaluable signals from the all-knowing financial markets. According to the predominant economic theory, the term structure embodies the best possible forecasts of future short rates and thus the best possible forecasts of what the market thinks the central bank will (or is it should?) do. In practice, however, forecasts of future short rates derived from the term structure prove to be wide of the mark—perhaps because of myopia, or perhaps for other reasons. At the very least, the collective wisdom that is supposedly embodied in the term structure appears to be greatly overrated. A market-friendly central bank is also informed, and supposedly constrained, by uncovered interest parity, which links short-term interest rates tightly to near-term exchange rate expectations. Look up the expected dollar/euro exchange rate and the European interest rate, and the market will tell you what the corresponding U.S. interest rate should be. But once again, this source of market wisdom fails the empirical test. Its implied exchange rate forecasts err badly. Odd as it may seem, interest rate differentials and exchange rates frequently go their own ways. The upshot of all this is that it may not be wise for a central bank to take its marching orders from the markets. But that does not mean that modern central bankers should emulate King Canute and pretend they can command the markets. Plainly, they cannot. Rather, an astute central banker nowadays should view the markets as a powerful herd that is sometimes right, sometimes wrong, always a force to be reckoned with, but sometimes manipulable. Most fundamentally, the markets need to be led, not followed. For a central bank to be the leader, it must set out on a sensible and comprehensible course—or else the putative followers may refuse to fall in line.
Furthermore, being transparent about its goals and its methods should help the central bank assume this leadership role by teaching the markets where and how it wants to lead them. (p.96) We have thus come full circle. The greater transparency explained and extolled in chapter 1 should, it appears, help put the monetary authorities in the position of leader, rather than follower. It's a nice symbiosis—or, as Shakespeare once put it, “a consummation devoutly to be wished.”

# Appendix to Chapter 3: The Expectations Theory of the Term Structure

Start with the two-period example mentioned in the text. Arbitrage implies that, ignoring possible risk or liquidity premiums, the two-period interest factor must be the product of today's one-period interest factor and the one-period interest factor expected to prevail one period from now:

(3.6) $(1+r_{2,t})^{2} = (1+r_{1,t})(1+r^{e}_{1,t+1})$

Here $r_{i}$ is the i-period interest rate (expressed at an annual rate), t measures time, and the superscript e indicates an expectation. Proceeding similarly, the three-period interest factor should be the product of today's one-period interest factor and the next two expected one-period factors:

$(1+r_{3,t})^{3} = (1+r_{1,t})(1+r^{e}_{1,t+1})(1+r^{e}_{1,t+2})$

and so on—except for possible term premiums—for longer maturities. By (3.6), this last expression can be written more compactly as

(3.7) $(1+r_{3,t})^{3} = (1+r_{2,t})^{2}(1+r^{e}_{1,t+2})$

which relates the current two-period and three-period interest rates. Because both $r_{2,t}$ and $r_{3,t}$ are observable, equations like (3.7) can be used to deduce the implied forward rates mentioned in the text, although the actual calculations are bedeviled by the question of (p.97) how to handle term premiums. For example, ignoring any term premiums:

$1+r^{e}_{1,t+2} = \dfrac{(1+r_{3,t})^{3}}{(1+r_{2,t})^{2}}$

In words, the one-period interest factor expected to prevail two periods from now is the ratio of the three-period interest factor divided by the two-period factor.
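The implied-forward calculation just described is easy to sketch numerically (the 4 percent two-period and 5 percent three-period rates below are invented for illustration, and term premiums are ignored):

```python
def implied_forward(r2, r3):
    # (1 + r3)^3 = (1 + r2)^2 * (1 + f)  =>  f = (1 + r3)^3 / (1 + r2)^2 - 1
    return (1 + r3) ** 3 / (1 + r2) ** 2 - 1

f = implied_forward(r2=0.04, r3=0.05)
print(round(f, 4))  # → 0.0703: the market "expects" about a 7% one-period rate two periods out
```

A flat curve is a quick sanity check: if the two- and three-period rates are both 5 percent, the implied forward rate is also 5 percent.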
The corresponding equation for nine- and ten-year bonds is

(3.8) $1+r^{e}_{1,t+9} = \dfrac{(1+r_{10,t})^{10}}{(1+r_{9,t})^{9}}$

If we take logs of (3.8) and use the standard approximation log(1+r) ≈ r, we get equation (3.3) in the text. The corresponding linear regression alluded to in the text is

$r_{1,t+9} = a + b\,(10\,r_{10,t} - 9\,r_{9,t}) + \varepsilon_{t}$

and, as reported, estimating this equation leads to a resounding rejection of the null hypothesis b = 1.

## Notes:

(1) . See, among others, Cukierman (1992), Debelle and Fischer (1995), and McCallum (1997).
(2) . See, for example, Fischer (1994) or Eijffinger and De Haan (1996).
(3) . See Posen (1993) or Campillo and Miron (1997).
(4) . I first raised this danger in Blinder (1995).
(5) . For example, a few observers have gone so far as to claim that central banks should simply let markets determine interest rates. See, for example, Ely (1998). Fortunately, this is not the dominant view.
(6) . On herding, see, for example, Banerjee (1992) and Scharfstein and Stein (1990). On overreaction, see Shiller (1979, 1981) and Gilles and LeRoy (1991).
(7) . I have tried several times to track this quotation down. Several of Black's friends remember hearing him say it, but none have been able to point me to a published source.
(8) . Bikhchandani and Sharma (2000) is a useful survey which helped inform the next few paragraphs.
(9) . Bikhchandani and Sharma (2000) cite five scholarly papers dated between 1995 and 1999.
(10) . Most empirical studies seem to use the measure devised by Lakonishok, Shleifer, and Vishny (1992).
(11) . Regarding fads, see Shiller (1984, 2000). Regarding bubbles, see Flood and Garber (1980) and West (1987). Garber (2000) reminds us that we should not be too quick to declare a bubble.
(12) . For example, in Blinder (1998), 61.
(13) . This statement assumes risk neutrality.
(14) . One possibility (which I owe to Christopher Sims): If the short-term interest rate literally follows a first-order autoregressive process (so that only its own lagged value matters), an interest-rate shock today will move the expectations of all future short rates, making the short rate and the implied forward rates perfectly correlated. But the correlation drops away from 1.0 as more lags and/or more variables are added.
(15) . The thorough Fed staff calculated this correlation for several earlier years as well. Sometimes it was higher than 0.54, sometimes lower.
(16) . The word “roughly” refers to the fact that the approximation log(1+x) ≈ x is used. See the appendix to this chapter.
(17) . This equality ignores any possible risk or liquidity premiums, which are mentioned below.
(18) . The yield curve is a graph relating the rate of interest to the maturity of the instrument.
(19) . This account leaves out the aforementioned term premium, which is what makes such exercises complicated.
(20) . Two useful references are Shiller (1990) and Campbell (1995).
(21) . The data come from McCulloch and Kwon (1993).
(22) . The line is not forced to go through the origin to allow for different term premiums on nine-year and ten-year bonds.
(23) . The estimate of the standard error uses the Newey-West correction.
(24) . If that sounds complicated, see Campbell (1995) for an explanation.
(25) . A plot similar to figure 3.1 for three-month and six-month Treasury bill rates using daily data from January 1982 through November 2001 (not shown here) looks much better. It appears that the expectations theory works better at the very short end of the yield curve.
(26) . Chow's (1989) long rate was twenty years; his short rate was one month; and his sample was monthly U.S. data from 1959 to 1983.
(27) . The long-run asymptotic effect is 97 basis points, insignificantly different from 100.
(28) . In this case, the long-run asymptotic effect is 106 basis points.
(29) . There is a bright side, however. If myopia leads the markets to overreact to the central bank's decisions, the power and speed of monetary policy will thereby be enhanced.
(30) . Purists will note that E(1/X) is not equal to 1/E(X), which is one reason for the word “approximately.” But it has never been clear—at least to me—what to make of this Jensen's inequality problem, for while the American investor presumably cares about E(1/X), the German investor presumably cares about E(X).
(31) . Once again, there are potential complications owing to such things as liquidity premia. These are relevant to levels but should mostly wash out when we deal with changes.
(32) . Over durations and currencies for which forward markets exist, there is a version of (3.4) called covered interest rate parity. Unlike its uncovered brother, covered interest rate parity must hold because people can actually carry out all the necessary transactions to ensure that the arbitrage relation holds.
(33) . His regressions pertain to the following exchange rates: pound/DM, dollar/pound, pound/French franc, pound/yen, dollar/DM, $/yen, and DM/yen. They all end in December 1998, and they begin at various dates from January 1976 to October 1978.
(34) . See Goodman (1997). See also Taylor (1997), which is a summary of a special issue of the International Journal of Finance and Economics devoted to technical analysis.
(35) . See, for example, Meredith and Chinn (1998).
(36) . Meredith (2001) and Alquist and Chinn (2002) both emphasize the roles of productivity and profitability in attracting foreign capital. But there is still a leap to connect capital inflows to expansionary monetary policy.
(37) . See Kenen (2001), 55–56.
(38) . Interest rate parity, whether covered or uncovered, reminds us that such an operation may have implications for foreign interest rates.
(39) . By contrast, an unsterilized intervention (e.g., buying a foreign bond and paying for it with newly created high-powered money) should move both the exchange rate and the domestic interest rate.
(40) . This leaves out the logical possibility that today's exchange rate and tomorrow's expected exchange move up or down in proportion, leaving xe unchanged. But in that case, one wonders what will ultimately happen to the country's current account balance.
(41) . There are other mechanisms via which economists have sometimes argued that sterilized interventions might work—for example, if forex operations signal future changes in (domestic) monetary policy. This mechanism muddies the waters, in my view, because it argues that sterilized interventions create expectations of future unsterilized interventions.
(42) . See, for example, Edison (1993).
(43) . See Dominguez and Frankel (1993) and, especially, the recent survey by Sarno and Taylor (2001).
(44) . In support of this view, Peter Kenen (1988) found that interventions in the European Monetary System tended to be most effective when market expectations (measured by survey data) were most disperse. (p.108)
http://mathmistakes.org/trig-identities-2/

In the comments: can you reconstruct each step of the student's argument? What was he thinking?
Thanks to Kristen Fouss for the submission.
[We have a growing library of trig identities, and I’ve got like 10 more trig identities to post over the next few weeks. Seems to be a tough spot for students.]
• That’s not an identity problem, that’s fractions. He seems pretty solid on the identities.
• mpershan
Maybe I'm missing something, but I think there are some trig issues as well. The student's work claims that $\cot^{2}(x) + \csc^{2}(x) = 1$ (in the move from the 1st line in pencil to the 2nd line in pencil).
• Oops. Missed that. You’re right. I was focusing on the fraction errors, which are dismal considering it’s a trig student.
• But trig is fractions – it’s why we are all over the elementary teachers who tell our kids they don’t need to know how to manipulate fractions. Can’t use a calculator for algebra 2 (can use Wolfram Alpha, though).
Um, yeah. I know trig is fractions. I also know that if you have a kid who can't master fractions, teaching trig is amazingly pointless. So I was focusing on the fraction errors.
• Eric Fleming
The student was thinking…
Step 1: Get a common Denominator.
Step 2: Gross, my numerator is a mess. Is there an identity to substitute to make it friendlier?
Step 3: Okay, I see csc(x) in the numerator and denominator, CANCEL!! Ooo, I can add those now.
Step 4: Factor of 2’s cancel.
Step 5: Dividing by cot(x) is silly, reciprocal of the reciprocal of tan(x), keep the 2 in front.
• It’s Friday, I’m up for the LaTex lesson… Mike, if it’s wrong, please rescue me!
I’m old school, I only do $\sin \left( x\right)$ and $\cos \left( x\right)$
$\dfrac {\dfrac {\cos \left( x\right) } {\sin \left( x\right) }} {1+\dfrac {1} {\sin \left( x\right) }}+\dfrac {1+\dfrac {1} {\sin \left( x\right) }} {\dfrac {\cos \left( x\right) } {\sin \left( x\right) }}$ assuming the LaTeX is right.
So then I put $\dfrac {\sin \left( x\right) } {\sin \left( x\right) } =1$ , thereby avoiding using the word canceling.
Now $\dfrac {\cos \left( x \right) } {\sin \left( x\right) +1}+\dfrac {\sin \left( x\right) +1} {\cos \left( x\right) }$
Multiplying to a common denominator, $\dfrac {\cos ^{2}\left( x\right) +\sin ^{2}\left( x\right) +2\sin \left( x\right) +1} {\cos\left( x\right) \left( \sin \left( x\right) +1\right) }$
Now, look here, the $\cos^{2}\left( x\right) + \sin^{2}\left( x\right)$ gives us 1. So I wonder whether this little friend saw someone else’s work and copied badly.
After that it was just fractions.
$\dfrac {2+2\sin \left( x\right) } {\cos \left( x\right) \left( \sin \left( x\right) +1\right) }$
and then easily to $\dfrac {2} {\cos \left( x\right) }=2\sec \left( x\right)$
Latex is hard. http://www.quickmeme.com/meme/35945v/
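As a quick numeric spot-check of the derivation above (an addition for illustration, not part of the original thread), the original expression and the simplified $2\sec(x)$ agree at arbitrary sample points:

```python
import math

def lhs(x):
    # cot(x)/(csc(x) + 1) + (csc(x) + 1)/cot(x)
    cot = math.cos(x) / math.sin(x)
    csc = 1 / math.sin(x)
    return cot / (csc + 1) + (csc + 1) / cot

def rhs(x):
    return 2 / math.cos(x)  # 2*sec(x)

for x in (0.3, 1.0, 2.5, -0.7):
    assert math.isclose(lhs(x), rhs(x)), x
print("identity holds at the sampled points")
```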
• The student knows that the trig identity must reduce to something simple (because that’s the type of problems that are always set). You can see all of the erased work where they tried other approaches and they only stopped in this last attempt since they got something simple – despite or because of the mistakes.
The mistakes and thinking have been well covered by the first three commenters, so here’s my version of a solution using Weierstrass substitution.
Set $t = \tan(x/2)$ with $-\frac{\pi}{2} < \frac{x}{2} < \frac{\pi}{2}$,
so $\sin(x)=\frac{2t}{1+t^2}$, $\cos(x)=\frac{1-t^2}{1+t^2}$.
Substitute and simplify:
$\frac{\cot(x)}{\csc(x)+1}+\frac{\csc(x)+1}{\cot(x)} = \frac{1-t}{1+t}+\frac{1+t}{1-t} = 2\frac{1+t^2}{1-t^2} = 2\sec(x)$
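Since everything after the Weierstrass substitution is a rational-function identity in $t$, it can be checked exactly with rational arithmetic (a sketch added for illustration; the sample values of $t$ are arbitrary):

```python
from fractions import Fraction

def check(t):
    # Weierstrass substitution: with t = tan(x/2),
    # sin(x) = 2t/(1+t^2) and cos(x) = (1-t^2)/(1+t^2).
    sin_x = 2 * t / (1 + t ** 2)
    cos_x = (1 - t ** 2) / (1 + t ** 2)
    cot_x, csc_x = cos_x / sin_x, 1 / sin_x
    lhs = cot_x / (csc_x + 1) + (csc_x + 1) / cot_x
    mid = (1 - t) / (1 + t) + (1 + t) / (1 - t)  # the intermediate form in the post
    rhs = 2 / cos_x                              # 2*sec(x)
    return lhs == mid == rhs                     # exact comparison of Fractions

assert all(check(Fraction(p, q)) for p, q in [(1, 3), (2, 5), (-1, 4), (7, 9)])
print("exact check passed")
```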
http://mathhelpforum.com/discrete-math/217919-set-problems-print.html

Set Problems
• April 21st 2013, 09:34 AM
xmathlover
Set Problems
Using ordered-pair notation, let g be the function defined by
g = {(5n, 3n) | n element of Z}
a) State the domain and a reasonable codomain of g.
b) Evaluate g(10), g(g(50)), and g^-1(-6).
Isn't the domain all real numbers?
How do I go about evaluating and finding the domain and codomain
• April 21st 2013, 09:44 AM
Plato
Re: Set Problems
Quote:
Originally Posted by xmathlover
Using ordered-pair notation, let g be the function defined by
g = f(5n, 3n) | n element of Z
a) State the domain and a reasonable codomain of g.
b) Evaluate g(10), g(g(50)), and g^-1(-6).
Isn't the domain all real numbers?
How do I go about evaluating and finding the domain and codomain
What you have posted leaves a great many questions.
What in the world is the definition of $f(5n,3n)~?$
Do you mean $g(n)=f(5n,3n)~?$.
Where do ordered pairs come in to it at all?
Please repost a meaningful statement.
• April 21st 2013, 09:46 AM
xmathlover
Re: Set Problems
I fixed it.
{5n, 3n | n elements of Z}
• April 21st 2013, 09:54 AM
Plato
Re: Set Problems
Quote:
Originally Posted by xmathlover
I fixed it.
{5n, 3n | n elements of Z}
If $g=\{(5n,3n):n\in\mathbb{Z}\}$ then the domain of $g$ is the set of multiples of five.
The image set (final set or codomain) is the set of multiples of three.
EX: $g(10)=g(5\cdot 2)=3\cdot 2=6$
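Plato's reading can be modeled in a few lines (an illustration added here, not from the thread): the domain of g is the multiples of five, its image is the multiples of three, and g sends 5n to 3n.

```python
def g(x):
    # g = {(5n, 3n) : n in Z} — defined only when x is a multiple of 5
    if x % 5 != 0:
        raise ValueError(f"{x} is not in the domain of g")
    return 3 * (x // 5)

def g_inv(y):
    # the inverse sends 3n back to 5n — defined only on multiples of 3
    if y % 3 != 0:
        raise ValueError(f"{y} is not in the image of g")
    return 5 * (y // 3)

print(g(10), g(g(50)), g_inv(-6))  # → 6 18 -10
```

Note that `g(7)` raises an error here, which mirrors the point made later in the thread that seven is not in the domain.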
• April 21st 2013, 10:03 AM
xmathlover
Re: Set Problems
How did you come to that conclusion? do you look at the ordered pairs as x input, y output?
How would I evaluate g(g(50)) and g^-1(-6)?
g(50) = g(5*10) = 3 * 10 = 30, now how do I run it through the outer g?
• April 21st 2013, 10:33 AM
Plato
Re: Set Problems
Quote:
Originally Posted by xmathlover
How did you come to that conclusion? do you look at the ordered pairs as x input, y output?
How would I evaluate g(g(50)) and g^-1(-6)?
g(50) = g(5*10) = 3 * 10 = 30, now how do I run it through the outer g?
$g(30)=g(5\cdot 6)=3\cdot 6=18$ and $-6=3\cdot ?$ so ?
• April 21st 2013, 10:37 AM
xmathlover
Re: Set Problems
what do you do with the g^-1?
• April 21st 2013, 10:44 AM
Plato
Re: Set Problems
Quote:
Originally Posted by xmathlover
what do you do with the g^-1?
From your questions, it seems that you need a good review of function notation.
$g(g^{-1}(t))=t$. That is if $g^{-1}(t)=s$ then $g(s)=t$
Thus $g^{-1}(-21)=-35$, now can you explain why?
• April 21st 2013, 10:57 AM
xmathlover
Re: Set Problems
you lost me! lol...
• April 21st 2013, 11:03 AM
Plato
Re: Set Problems
Quote:
Originally Posted by xmathlover
you lost me! lol...
I really don't think that you are ready to do these questions.
Why not give your notes and/or textbook a good review?
Asking for $g^{-1}(-21)$ is the same as asking $g(?)=-21$, fill in the blank?
• April 21st 2013, 11:08 AM
xmathlover
Re: Set Problems
for this set {5n, 3n}?
g(7) = -21
wouldn't g^-1(-6) = 2?
• April 21st 2013, 11:11 AM
Plato
Re: Set Problems
Quote:
Originally Posted by xmathlover
for this set {5n, 3n}?
g(7) = -21
No. $g(7)$ does not exist, because seven is not in the domain of $g$. WHY?
• April 21st 2013, 11:13 AM
xmathlover
Re: Set Problems
because it isn't a multiple or 5 or 3?
But by that assumption nothing will work.
• April 21st 2013, 12:05 PM
Plato
Re: Set Problems
Quote:
Originally Posted by xmathlover
because it isn't a multiple or 5 or 3?
But by that assumption nothing will work.
See, as I told you: You do not understand the notation.
$g^{-1}(-21)=-35$. It does work: $g(-35)=-21$.
• April 21st 2013, 12:34 PM
xmathlover
Re: Set Problems
oh so then, g^-1(-6) = -10
Is the domain of g expressed like this {x | x +- 5} or should it be expressed differently? My notes suck...they don't explain anything about domain and codomain in ordered-pair notation
https://nrich.maths.org/76
# It Figures
##### Stage: 2 Challenge Level:
I am writing this on St. Valentine's Day (Feb 14th), and we're half way through the month.
So we're one-and-a-half months into the new year.
Well, I know that $1 \frac{1}{2}$ is $\frac{1}{2}$ of $3$, which is $\frac{1}{2}$ of $6$, and that's $\frac{1}{2}$ of $12$.
So what we have is $\frac{1}{2}$ of $\frac{1}{2}$ of $\frac{1}{2}$ of $12$ months.
Now that's $\frac{1}{8}$ of a year!
We can put that down as: $\frac{1}{2}\times \frac{1}{2}\times\frac{1}{2} = \frac{1}{8}$
As the days go by, so the fractions change.
Back in the middle of January we were $\frac{1}{2}$ of $\frac{1}{3}$ of $\frac{1}{4}$ through the year!
$\frac{1}{2}\times \frac{1}{3} \times \frac{1}{4} = \frac{1}{24}$
At the end of February we were a half of a third through the year:
$\frac{1}{2}$ of $\frac{1}{3}= \frac{1}{6}$
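These fraction-of-the-year products can be mirrored with exact rational arithmetic (a small illustrative sketch):

```python
from fractions import Fraction as F

mid_feb = F(1, 2) * F(1, 2) * F(1, 2)  # one-and-a-half months into twelve
mid_jan = F(1, 2) * F(1, 3) * F(1, 4)  # mid-January
end_feb = F(1, 2) * F(1, 3)            # end of February
print(mid_feb, mid_jan, end_feb)  # → 1/8 1/24 1/6
```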
Well, let's explore the interesting numbers we get.
Suppose we allow ourselves to use three numbers less than $10$ and multiply them together [like I used $2$, $2$, $2$, for the middle of February and $2$, $3$, $4$, for the middle of January].
Then we could get things like :-
$2 \times 3 \times 5 = 30$
$3 \times 5 \times 8 = 120$
$6 \times 6 \times 6 = 216$
and so on ...
See if you can find lots and lots.
You could then explore these numbers in different ways.
A usual thing is to put them in order from the lowest to the highest.
You could look at the answers that come up more than once. Like $36$ which comes up three times from doing :-
$2 \times 3 \times 6$
$2 \times 2 \times 9$
$3 \times 3 \times 4$
When you find things out from these answers that you get you should ask yourself WHY??
Now we always should ask ourselves "I wonder what would happen if I ...?".
So, "I wonder what would happen if we only allowed ourselves to use two numbers below $10$ to multiply together?"
Now we will get answers like :-
$7 \times 2 = 14$
$3 \times 8 = 24$
$5 \times 9 = 45$
Now, if you find $31$ different answers, I thought you would have found them all!
However, thanks to Mr A Thompson from Chestnutgrove School Academy in Wandsworth and his hard-working Yr 7 students, I stand corrected and the number is $46$!!
Like before, we could write them out in order and explore them. Don't forget to look at those answers that occur more than once like $18$ which comes from :-
$2 \times 9$
$3 \times 6$
Well some of you will now be aware that there are $120$ answers altogether from multiplying three figures together. [I wonder how you know that? Was it just by counting? Did you do some interesting additions? Did you ...?]
We're counting the ones that maybe have the same answer but have come from different multiplications.
When you only have two figures to multiply together then there are $36$ answers.
How have you done?
Can you find the rest?
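Both totals can be checked by listing every unordered choice (again a sketch, assuming the figures run from $2$ to $9$):

```python
from itertools import combinations_with_replacement

# Every unordered multiplication, counting repeated answers separately.
triples = list(combinations_with_replacement(range(2, 10), 3))
pairs = list(combinations_with_replacement(range(2, 10), 2))
print(len(triples))  # 120 multiplications of three figures
print(len(pairs))    # 36 multiplications of two figures
```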
So what on earth happens if we use four or even more figures to multiply together?
Or maybe you'd like to use numbers higher than $9$ [perhaps up to $12$], allowing you, when using three numbers altogether, to have:
$2 \times 5 \times 12 = 120$
$11 \times 11 \times 11 = 1331$
https://tex.stackexchange.com/questions/96501/what-does-relax-do | What does \relax do? [duplicate]
The title suggests the question. On and off, in macros here on TeX.SE, I see the \relax command frequently.
1. I was wondering what it does and where/when should I use it?
2. Are there any precautions that I should take while using it? (Are there side-effects?)
It would be great if you support your answers with a simple example.
marked as duplicate by Martin Schröder, barbara beeton, Mensch, Stefan Kottwitz♦ Feb 2 '13 at 20:56
• Short answer: it does nothing, but is not expandable. – egreg Feb 2 '13 at 18:59
• It's often used after some "fragile" commands and in \if constructions. – Eddy_Em Feb 2 '13 at 19:01
• What you often see on this site is that it makes TeX stop collecting tokens for an argument of a specific type (very very roughly speaking). Suppose your macro is looking for some number in its argument: \mymacro12345\relax 678 would make it stop at the five and then do whatever else it's supposed to do. There are many more use cases, but that's pretty much the main reason for its appearance in the answers here. – percusse Feb 2 '13 at 19:05
• – David Carlisle Feb 2 '13 at 19:08
Although \relax does nothing by itself, it is a safe command to stop expansion of another command. Some examples:
• (plain tex) \hskip 5pt\relax -- in the absence of \relax, the \hskip will keep looking for plus or minus
• (latex) at the end of a line, \\ \relax [...] will prevent what is in brackets from being interpreted as the optional argument of \\, i.e. a dimension that would add vertical space
(actually, this is pretty well explained by answers to this question.)
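The number-scanning behaviour mentioned in the comments can be made concrete with a minimal plain-TeX sketch (the count register name \mycount is just illustrative):

```latex
\newcount\mycount
% Without the \relax, TeX would keep expanding the tokens that follow,
% hoping to find more digits after "42".
\mycount=42\relax
\ifnum\mycount=42\relax
  \message{mycount is 42}%
\fi
\bye
```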
• @Τίμων Of course your suggested edit of the capitalization is technically correct, but in this special case, the missing capitalization actually carries information, see e.g. tug.org/interviews/beeton.html, so an edit would change the meaning of the answer. – user36296 Aug 8 '16 at 13:51
It is what's called a no-op: It does nothing, and it's used in various places where you don't want anything done, but the syntax requires something. TeX's rules also dictate that in an \if statement, an undefined macro will compare equal to \relax. So it's sort of a general-purpose nothing.
(The empty brace group {} is another kind of nothing, as the question linked to by David Carlisle illustrates).
https://escholarship.org/uc/item/04g5p8ph
Open Access Publications from the University of California
## Toward the Systematic Design of Complex Materials from Structural Motifs
Abstract
With first-principles calculations based on density functional theory, we can predict with good accuracy the electronic ground state properties of a fixed arrangement of nuclei in a molecule or crystal. However, the potential of this formalism and approach is not fully utilized; most calculations are performed on experimentally determined structures and stoichiometric substitutions of those systems.
This in part stems from the difficulty of systematically generating 3D geometries that are chemically valid under the complex interactions existing in materials. Designing materials is a bottleneck for computational materials exploration; there is a need for systematic design tools that can keep up with our calculation capacity. Identifying a higher level language to articulate designs at the atomic scale rather than simply points in 3D space can aid in developing these tools.
Constituent atoms of materials tend to arrange in recognizable patterns with defined symmetry such as coordination polyhedra in transition metal oxides or subgroups of organic molecules; we call these structural motifs. In this thesis, we advance a variety of systematic strategies for understanding complex materials from structural motifs on the atomic scale with an eye towards future design.
In collaboration with experiment, we introduce the harmonic honeycomb iridates with frustrated, spin-anisotropic magnetism. At the atomic level, the harmonic honeycomb iridates have identical local geometry, where each iridium atom, octahedrally coordinated by oxygen, hosts a $J_{eff}=1/2$ spin state that experiences interactions in orthogonal spin directions from three neighboring iridium atoms. A homologous series of harmonic honeycombs can be constructed by changing the connectivity of their basic structural units.
Also in collaboration with experiment, we investigate the metal-organic chalcogenide assembly [AgSePh]$_\infty$ that hosts 2D physics in a bulk 3D crystal. In this material, inorganic AgSe layers are scaffolded by organic phenyl ligands preventing the inorganic layers from strongly interacting. While bulk Ag$_2$Se is an indirect band gap semiconductor, [AgSePh]$_\infty$ has a direct band gap and photoluminesces blue. We propose that these hybrid systems present a promising alternative approach to exploring and controlling low-dimensional physics due to their ease of synthesis and robustness to the ambient environment, contrasting sharply with the difficulty of isolating and maintaining traditional low-dimensional materials such as graphene and MoS$_2$.
High-throughput approaches that automate density functional theory are a promising means of identifying new materials with a given property. We automate a search for ferroelectric materials by integrating density functional theory calculations, crystal structure databases, symmetry tools, workflow software, and a custom analysis toolkit. Structural distortions that occur in the structural motifs of ferroelectrics give rise to a switchable spontaneous polarization. In ferroelectrics, lattice, spin, and electronic degrees of freedom couple, leading to exotic physical phenomena and making these materials technologically useful (e.g. for non-volatile RAM).
We also propose a new neural network architecture that encodes the symmetries of 3D Euclidean space for learning the structural motifs of atomic systems. We describe how these networks can be used to speed up important components of the computational materials discovery pipeline and generate hypothetical stable atomic structures.
Finally, we conclude with a discussion of the materials design tools deep learning may enable and how these tools could be guided by the intuition of materials scientists.
https://gmatclub.com/forum/right-triangle-abc-is-to-be-drawn-in-the-xy-plane-so-that-88958.html
# Right triangle ABC is to be drawn in the xy-plane so that
Intern
Joined: 18 Jul 2009
Posts: 37
Right triangle ABC is to be drawn in the xy-plane so that [#permalink]
Updated on: 16 Jul 2013, 00:17
Question Stats: 59% (02:20) correct, 41% (02:54) wrong, based on 802 sessions
Right triangle ABC is to be drawn in the xy-plane so that the right angle is at A and AB is parallel to the y-axis. If the x- and y-coordinates of A, B, and C are to be integers that are consistent with the inequalities -6 ≤ x ≤ 2 and 4 ≤ y ≤ 9 , then how many different triangles can be drawn that will meet these conditions?
A. 54
B. 432
C. 2,160
D. 2,916
E. 148,824
Originally posted by hrish88 on 09 Jan 2010, 04:50.
Last edited by Bunuel on 16 Jul 2013, 00:17, edited 3 times in total.
Edited the question and added the OA
##### Most Helpful Expert Reply
Math Expert
Joined: 02 Sep 2009
Posts: 55272
Re: tough problem [#permalink]
09 Jan 2010, 05:34
hrish88 wrote:
I've got it right, but this problem is very time consuming. Can anyone suggest a shorter method?
We have the rectangle with dimensions 9*6 (9 horizontal dots and 6 vertical). AB is parallel to y-axis and AC is parallel to x-axis.
Choose the (x,y) coordinates for vertex A: 9C1*6C1;
Choose the x coordinate for vertex C (as y coordinate is fixed by A): 8C1, (9-1=8 as 1 horizontal dot is already occupied by A);
Choose the y coordinate for vertex B (as x coordinate is fixed by A): 5C1, (6-1=5 as 1 vertical dot is already occupied by A).
9C1*6C1*8C1*5C1 = 2160.
Answer: C.
_________________
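The count above is easy to confirm by brute force; here is a short Python check (the variable names are mine, not part of the post):

```python
from itertools import product

xs = range(-6, 3)   # 9 integer x-values allowed by -6 <= x <= 2
ys = range(4, 10)   # 6 integer y-values allowed by 4 <= y <= 9

count = 0
for ax, ay in product(xs, ys):   # vertex A (the right angle)
    for by in ys:                # B shares A's x-coordinate (AB parallel to y-axis)
        if by != ay:
            for cx in xs:        # C shares A's y-coordinate (AC parallel to x-axis)
                if cx != ax:
                    count += 1
print(count)  # 2160
```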
##### Most Helpful Community Reply
Manager
Joined: 02 Jan 2013
Posts: 56
GMAT 1: 750 Q51 V40
GPA: 3.2
WE: Consulting (Consulting)
Re: Right triangle ABC is to be drawn in the xy-plane so that [#permalink]
25 Jan 2013, 13:03
Slightly different way of thinking:
On the 9x6 grid of possibilities, I can imagine a bunch of rectangles (with sides parallel to x and y axes). Each of these rectangles contains 4 triangles that fit the description of the question stem.
therefore:
Answer = ( # of Rectangles I can make on the grid) x 4
To create the rectangle, I need to pick 2 points on the x direction, and 2 points on the y direction. Therefore:
Answer = C(9,2) * C(6,2) * 4 = 36 * 15 * 4 = 2160 (OPTION C)
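The rectangle-counting shortcut checks out numerically (a quick sketch using Python's math.comb):

```python
from math import comb

# Pick 2 x-values and 2 y-values on the 9x6 grid of lattice points.
rectangles = comb(9, 2) * comb(6, 2)
print(rectangles)      # 540 axis-aligned rectangles
print(rectangles * 4)  # 2160: each rectangle yields 4 qualifying right triangles
```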
##### General Discussion
Intern
Joined: 18 Jul 2009
Posts: 37
Re: tough problem [#permalink]
09 Jan 2010, 05:50
OA is C. Very nice explanation. You rock, man, as always.
Manager
Joined: 07 Aug 2010
Posts: 60
Re: tough problem [#permalink]
14 Oct 2010, 22:06
Good one. +1 for it. Hope I didn't mess it up.
so what about the triangles that look like the mirror images of the ones above? - think, switching the co-ords of A and C along x axis and switching A and B along y axis....
_________________
Click that thing - Give kudos if u like this
Math Expert
Joined: 02 Sep 2009
Posts: 55272
Re: tough problem [#permalink]
15 Oct 2010, 02:58
BlitzHN wrote:
so what about the triangles that look like the mirror images of the ones above? - think, switching the co-ords of A and C along x axis and switching A and B along y axis....
The above solution counts all positions: C to the right of A as well as to the left (AC and CA), and B above A as well as below. For example, point C with 8C1 can be placed to the right or to the left of A, and point B with 5C1 can be placed below or above A. So all cases are covered.
More here: arithmetic-og-question-88380.html?hilit=dimensions
Hope it helps.
_________________
Manager
Status: ISB, Hyderabad
Joined: 25 Jul 2010
Posts: 129
WE 1: 4 years Software Product Development
WE 2: 3 years ERP Consulting
Re: tough problem [#permalink]
17 Oct 2010, 20:24
C.
I am not sure if this approach is correct. I used elimination. There can be only 5 possible positions for B if we fix A. So the number of triangles possible has to be a multiple of 5. The only answer that satisfies the criterion is C.
_________________
Senior Manager
Joined: 13 Aug 2012
Posts: 418
Concentration: Marketing, Finance
GPA: 3.23
Re: Right triangle ABC is to be drawn in the xy-plane so that [#permalink]
25 Jan 2013, 07:08
First, get the integer points available for the x-axis: 2 - (-6) + 1 = 9
Second, get the integer points available for the y-axis: 9 - 4 + 1 = 6
How many ways to select the location of line AB in the x-axis? 9
How many ways to select the location of point C in the x-axis? 8 (Note: we cannot select the location of line AB)
How many ways to select the location of the base? 2 (Is it BC or AB?)
How many ways to position line AB parallel to y axis? 6!/2!4! = 15
Multiply all that: $$9*8*2*15 = 2,160$$
Answer: C
_________________
Impossible is nothing to God.
Intern
Joined: 01 Apr 2013
Posts: 16
Schools: Tepper '16 (S)
Re: tough problem [#permalink]
18 May 2013, 10:49
Hi Bunuel,
That was a fantastic solution, but I have a small doubt. How do we ensure that, by selecting points in this way, the properties of a triangle are always satisfied? Could there be some points through which we cannot even form a triangle, let alone a right-angled triangle? I hope I am clear in my question.
Math Expert
Joined: 02 Sep 2009
Posts: 55272
Re: tough problem [#permalink]
19 May 2013, 04:09
venkat18290 wrote:
Hi Bunuel,
That was a fantastic solution, but I have a small doubt. How do we ensure that, by selecting points in this way, the properties of a triangle are always satisfied? Could there be some points through which we cannot even form a triangle, let alone a right-angled triangle? I hope I am clear in my question.
ANY 3 non-collinear points on a plane form a triangle.
_________________
Intern
Joined: 01 Jan 2013
Posts: 29
Location: United States
Concentration: Entrepreneurship, Strategy
GMAT 1: 770 Q50 V47
WE: Consulting (Consulting)
Re: Right triangle ABC is to be drawn in the xy-plane so that [#permalink]
30 May 2013, 21:14
Bunuel... you're a freaking genius. Get a job with NASA already.
Senior Manager
Joined: 03 Apr 2013
Posts: 274
Location: India
Concentration: Marketing, Finance
GMAT 1: 740 Q50 V41
GPA: 3
Re: Right triangle ABC is to be drawn in the xy-plane so that [#permalink]
13 Nov 2013, 08:38
Another way of looking at the problem.
According to the given constraints, the co-ordinates have to be chosen this way :-
A(a,b), B(a,c), C(d,b), where a, b, c and d are arbitrary integers. If you check, this satisfies the constraint that AB must be parallel to the Y-axis.
Drawing the triangle and rotating it will give you a rectangle whose sides will measure length= |b-c| and breadth= |a-d|.
This rectangle's area will be = |b-c| X |a-d|
Now after having realized this, you just have to choose values from the given ranges such that the area is always non-zero,
and this can be done in the following way,
1. Selecting a and d from the range [-6,2], which has 9 elements, derived as 2 - (-6) + 1 = 9:
9C2 X 2 (2 because both a>d and d>a are permissible).
2. selecting b and c similarly
6C2 X 2.
3. Multiplying the two terms :-
9C2 X 6C2 X 2 X 2 = 2160.
Kudos if you liked it.
Do have a look at this approach Bunuel
_________________
Spread some love..Like = +1 Kudos
Intern
Joined: 22 Jun 2013
Posts: 11
Re: tough problem [#permalink]
13 Nov 2013, 09:38
Kudos Bunuel. Nice explanation.
_________________
KUDOS if you like my post .
Retired Moderator
Joined: 23 May 2018
Posts: 487
Location: Pakistan
Re: Right triangle ABC is to be drawn in the xy-plane so that [#permalink]
26 Jun 2018, 01:40
Making a rough diagram of the axes helps.
_________________
If you can dream it, you can do it.
Practice makes you perfect.
Kudos are appreciated.
CEO
Status: GMATINSIGHT Tutor
Joined: 08 Jul 2010
Posts: 2931
Location: India
GMAT: INSIGHT
Schools: Darden '21
WE: Education (Education)
Re: Right triangle ABC is to be drawn in the xy-plane so that [#permalink]
19 Sep 2018, 20:58
Please check the solution as attached.
Answer: Option C
Attachment: Screen Shot 2018-09-20 at 9.26.09 AM.png (solution diagram)
_________________
Prosper!!!
GMATinsight
Bhoopendra Singh and Dr.Sushma Jha
e-mail: info@GMATinsight.com I Call us : +91-9999687183 / 9891333772
Online One-on-One Skype based classes and Classroom Coaching in South and West Delhi
http://www.GMATinsight.com/testimonials.html
ACCESS FREE GMAT TESTS HERE:22 ONLINE FREE (FULL LENGTH) GMAT CAT (PRACTICE TESTS) LINK COLLECTION
Intern
Joined: 10 Jan 2016
Posts: 5
Right triangle ABC is to be drawn in the xy-plane so that [#permalink]
28 Sep 2018, 14:32
Hi Bunuel,
I have a doubt.
If AB is parallel to the Y-axis, how can we count x = 0 as a possibility for vertex A?
If x = 0, then vertex A lies on the Y-axis and therefore AB can't be parallel to the Y-axis.
With this in mind, I got 8*6 possibilities for vertex A.
Please help.
CEO
Status: GMATINSIGHT Tutor
Joined: 08 Jul 2010
Posts: 2931
Location: India
GMAT: INSIGHT
Schools: Darden '21
WE: Education (Education)
Re: Right triangle ABC is to be drawn in the xy-plane so that [#permalink]
28 Sep 2018, 20:49
Here comes some contradiction.
One definition says that parallel lines are lines that never intersect, but it omits the fact that the lines must lie in one plane for this to apply.
Here the line is parallel to the Y-axis, and the Y-axis is a direction while the origin is just a point of reference from which the Y direction may be referenced, so I believe that x = 0 may be taken for a line which is parallel to the Y-axis.
Intern
Joined: 09 Aug 2018
Posts: 13
Re: Right triangle ABC is to be drawn in the xy-plane so that [#permalink]
18 Mar 2019, 12:00
Bunuel
Thank you for your solution. I have a question: How can we ensure that the sum of any 2 sides of the triangle is greater than the third side and that the length is greater than the difference between the lengths of the other 2 sides? Thank you!
https://www.electricalexams.co/signal-will-become-zero-when-the-feedback-signal-and-reference-signs-are-equal/ | # ____ signal will become zero when the feedback signal and the reference signal are equal.
_______ signal will become zero when the feedback signal and reference signs are equal. | 2022-08-19 11:02:15 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9011914730072021, "perplexity": 645.9942022572145}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573667.83/warc/CC-MAIN-20220819100644-20220819130644-00448.warc.gz"} |
http://physics.stackexchange.com/questions/28971/how-large-can-you-make-a-tokamak?answertab=votes | # How large can you make a tokamak?
I've seen questions on how small you can make a tokamak. But I haven't yet seen any "physical" upper limit on the tokamak design.
If you take a wind turbine for example, doubling the linear dimensions will increase the swept area by a factor of 4 but the structural mass by a factor of 8, which clearly explains why you don't want to make (conventional) wind turbines above certain dimensions.
With a tokamak, I imagine that if you double the linear dimensions, the plasma volume (and hence the power production) will increase eightfold, whereas area that you have to protect against fast neutrons will only quadruple. So once you master the tokamak technology, you would only need to scale it up appropriately to bring down capital costs.
What am I missing? What cannot be scaled up easily in a tokamak?
The scaling of structural materials with linear dimension has come up several times on this site and I won't quite concede the claim that it scales with a factor of 8. The vanilla skyscraper problem actually gives an exponential. If you assumed the height didn't change with turbine blade length, then you would commit yourself to probably a simple $l$ factor or an exponential. Coincidentally, the vertical air speed profile is also treated as an exponential for boundary layer physics. Naturally, all said relations are wrong, but I don't see the basis for $l^3$ at all. – AlanSE May 25 '12 at 14:31
## 2 Answers
The big problem with controlled fusion is that the equations governing the plasma are highly non-linear. So each time physicists increase the size of a tokamak, new effects are discovered. So I guess the answer is that no one really knows the correct scaling laws!
This contrasts a lot with fission reactors, where the relevant equations are essentially linear (neutron diffusion). It was then possible to 'easily' scale up Enrico Fermi's first nuclear reactor, Chicago Pile-1, which had a power of just 0.5 W in 1942, to the design of the B Reactor in 1944, which had a power of 250 megawatts. That is essentially a factor of 500 million between the first and the second nuclear reactor!
EDITED TO ADD
I've just found this Wikipedia page about Dimensionless parameters in Tokamaks, which is quantitative. It essentially says that constructing a 1:3 model of a power-producing tokamak having the same turbulence transport processes is essentially infeasible, because it would require too high a magnetic field. Then there is a discussion I don't fully understand that tries to guess the properties of the large machine... In short: the turbulence in the plasma makes the use of scaling laws difficult.
That's a very interesting observation about non-linearity, which maybe also partly explains why it is taking so long to make fusion power viable as a commercial power source. – Joel May 25 '12 at 16:18
Anyway, so the answer to my question about scale-up is that "it may very well be possible to make huge tokamaks, but we just don't know. In any case, a simple linear scale-up isn't possible due to the large magnetic fields required". That answers my question, thanks! – Joel May 25 '12 at 16:23
@Joel: Actually, it seems (strangely) to be the opposite: it seems that building a small tokamak is difficult, because the magnetic field would be too big! Which explains why we have difficulty making reduced models in order to validate the concept. – Frédéric Grosshans May 25 '12 at 16:48
You actually make reference to something which is of crucial importance to the answer to this question:
"With a tokamak, I imagine that if you double the linear dimensions, the plasma volume (and hence the power production) will increase eightfold, whereas area that you have to protect against fast neutrons will only quadruple. So once you master the tokamak technology, you would only need to scale it up appropriately to bring down capital costs."
You suggest that the fusion power of the tokamak scales roughly as $\sim R^{3}$ (where $R$ is the tokamak major radius), but the surface area inside the device onto which the fusion neutrons are incident only scales as $\sim R^{2}$.
This is fairly accurate, although the scaling of the fusion power is closer to $R^4$ or $R^5$, for reasons I will mention later. However, from the context of your remark it sounds like you're implying that this difference in scaling would make it advantageous to scale tokamaks up to an arbitrarily large size. The reality is quite the opposite in fact.
The fact that the internal surface area of the tokamak scales less aggressively with $R$ than the fusion power is quite possibly the most fundamental reason that we cannot build very large tokamaks. This is because the neutron flux on the inner wall of the device scales like the fusion power divided by the surface area, so roughly as $\sim R^2$.
The materials which line the inner wall of a tokamak can only withstand a particular fluence of fusion neutrons before they must be replaced, as the neutrons cause significant structural weakening. Replacing these components is an extremely time consuming and expensive affair, as it must be carried out entirely by remotely controlled robots due to unsafe levels of radioactivity inside the device. The interior wall of the JET tokamak was replaced recently and the project took over a year to complete.
So the length of time for which you could run a tokamak fusion power plant before a major shutdown is required to replace the wall scales as $\sim R^{-2}$. Clearly this is a serious problem for a very large tokamak, as the inner wall will last an unfeasibly short time, making economically viable electricity generation impossible. After all, the goal of fusion energy research is to solve the looming energy crisis, so we must be able to produce electricity at a cost at least comparable to other renewable sources or there really isn't much point building a reactor in the first place!
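The scaling argument in this answer can be sketched numerically. This is my own illustration, not from the answer; the `scale` helper and its unit normalizations (everything set to 1 at the reference radius) are assumptions for the sake of the example:

```python
# Rough scaling sketch: fusion power P ~ R**4, plasma-facing wall area
# A ~ R**2, so the neutron wall load P/A ~ R**2 and the wall lifetime
# (time to reach a fixed material fluence limit) ~ R**-2.

def scale(R, R0=1.0, P0=1.0, A0=1.0):
    power = P0 * (R / R0) ** 4   # fusion power
    area = A0 * (R / R0) ** 2    # inner wall surface area
    flux = power / area          # neutron load on the wall, ~ R**2
    lifetime = 1.0 / flux        # wall lifetime at fixed fluence limit
    return power, area, flux, lifetime

P1, A1, F1, T1 = scale(1.0)
P2, A2, F2, T2 = scale(2.0)
print(F2 / F1)   # 4.0  -> doubling R quadruples the wall load
print(T2 / T1)   # 0.25 -> and quarters the wall lifetime
```

Doubling the machine thus buys sixteen times the power but only a quarter of the time between wall replacements, which is exactly the economic tension the answer describes.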
Although this thread has been inactive for quite a while, you actually had the answer in the question without realising it so I felt the need to let you know!
A small aside: I mentioned earlier that the fusion power scales more like $R^{4}$. This is because a larger tokamak has a greater distance between the centre of the plasma and the wall, and this allows a higher core plasma pressure to be achieved. This in turn increases the fusion reaction rate, so as you increase $R$ not only do you have a greater plasma volume, but you're also getting more fusion power per unit volume.
Thanks for this interesting answer. But this neutron flow argument does not hold anymore when considering aneutronic fusion, right? – Joel Dec 6 '12 at 13:09
@Joel: When you say aneutronic fusion what reactions do you have in mind? The only reaction which will produce any meaningful power in a tokamak is the DT fusion reaction which does release a neutron. Probably the more important point is that we need the neutron to carry some of the energy released away from the plasma so that we can capture it as heat, which we then use to drive turbines. – CBowman Dec 6 '12 at 15:02
I was thinking mainly of D/He-3, although I know that it's not really aneutronic. I guess for truly aneutronic reactions, like p/B-11 or He-3/He-3, you are not going to use the tokamak design anyway. But your point that the maximum size of a tokamak is coupled to the acceptable neutron radiation intensity (as well as the choice of material) is very valid, of course. – Joel Dec 6 '12 at 19:17
add comment | 2013-12-21 04:03:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7471919655799866, "perplexity": 607.7488031376124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345774929/warc/CC-MAIN-20131218054934-00002-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://mathematica.stackexchange.com/questions/120049/how-to-turn-a-colour-entity-into-a-colour-directive | # How to turn a colour entity into a colour directive?
I am not experienced with Wolfram Knowledgebase queries and working with entities.
What is the "correct" way to convert a colour entity into a colour directive?
Example:
ent = StarData["Sun", "Color"]
(* Entity["Color", {"RGB", {1., 0.96, 0.93}}] *)
I want to convert ent to RGBColor[1., 0.96, 0.93].
Is there an API for this, or is the only way extracting the part of the Entity expression?
Somehow extracting directly "doesn't feel right", but I may be mistaken about how this is intended to be used. It also isn't very convenient to extract directly because when working with such queries, sometimes Missing[...] is returned. Missing values are gracefully handled by functions such as EntityValue, but I'd have to handle them myself if I extract directly. An example is list = StarData[EntityClass["Star", {EntityProperty["Star", "DistanceFromEarth"] -> TakeSmallest[50]}], "Color"], which has some missing value. Now EntityValue[list, "HSLValue"] simply skips missing values without failing, while I would have to do extra work to handle them myself.
Overall, this question is about how these functions are intended to be used, and what is the most convenient way to use them when chaining multiple queries. Extracting with Part is obviously trivial.
• FromEntity[] should do the trick. – J. M.'s discontentment Jul 5 '16 at 12:50
• @J.M. Excellent! Answer? Or should I delete? – Szabolcs Jul 5 '16 at 12:54
• I'm using a phone right now, so I'm not sure if it'd have worked. Please feel free to write something if it did work. :) – J. M.'s discontentment Jul 5 '16 at 12:57
• @J.M. Yes, this is easily found in the documentation. Yet I did spend time on it and I didn't find it. I guess I should keep this in mind when closing beginners' questions as a "simple mistake". – Szabolcs Jul 5 '16 at 12:57
FromEntity and ToEntity can do the conversion.
FromEntity[StarData["Sirius", "Color"]]
(* RGBColor[0.73, 0.8, 1.] *)
• Or you could hit it with a hammer: RGBColor @@ ent[[-1, -1]] – Bob Hanlon Jul 5 '16 at 15:42
ent is an Entity of type "Color", so you can investigate its properties.
EntityProperties["Color"]
Of the many properties, the "Value" property (strangely, it has CommonName "Wolfram Language") will return the RGB color you are seeking.
ent["Value"]
(* RGBColor[1., 0.96, 0.93] *)
Hope this helps.
• I thought I had tried EntityValue[ent, EntityProperties[ent]]. I must have only tried specific properties then. – Szabolcs Jul 23 '16 at 18:49 | 2020-09-23 14:11:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2914740741252899, "perplexity": 2623.750071597022}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400210996.32/warc/CC-MAIN-20200923113029-20200923143029-00155.warc.gz"} |
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/1400/1/bl/a/ | # Properties
Label: 1400.1.bl.a
Level: $1400$
Weight: $1$
Character orbit: 1400.bl
Analytic conductor: $0.699$
Analytic rank: $0$
Dimension: $4$
Projective image: $D_{6}$
RM discriminant: 8
Inner twists: $8$
# Related objects
## Newspace parameters
Level: $$N = 1400 = 2^{3} \cdot 5^{2} \cdot 7$$
Weight: $$k = 1$$
Character orbit: $$[\chi] = $$ 1400.bl (of order $$6$$, degree $$2$$, not minimal)
## Newform invariants
Self dual: no
Analytic conductor: $$0.698691017686$$
Analytic rank: $$0$$
Dimension: $$4$$
Relative dimension: $$2$$ over $$\Q(\zeta_{6})$$
Coefficient field: $$\Q(\zeta_{12})$$
Defining polynomial: $$x^{4} - x^{2} + 1$$
Coefficient ring: $$\Z[a_1, a_2]$$
Coefficient ring index: $$1$$
Twist minimal: yes
Projective image: $$D_{6}$$
Projective field: Galois closure of 6.0.5378240000.1
## $q$-expansion
The $$q$$-expansion and trace form are shown below.
$$f(q)$$ $$=$$ $$q + \zeta_{12}^{5} q^{2} -\zeta_{12}^{4} q^{4} + \zeta_{12}^{5} q^{7} + \zeta_{12}^{3} q^{8} -\zeta_{12}^{2} q^{9} +O(q^{10})$$ $$q + \zeta_{12}^{5} q^{2} -\zeta_{12}^{4} q^{4} + \zeta_{12}^{5} q^{7} + \zeta_{12}^{3} q^{8} -\zeta_{12}^{2} q^{9} -\zeta_{12}^{4} q^{14} -\zeta_{12}^{2} q^{16} + ( -\zeta_{12}^{3} - \zeta_{12}^{5} ) q^{17} + \zeta_{12} q^{18} -\zeta_{12}^{5} q^{23} + \zeta_{12}^{3} q^{28} + ( 1 + \zeta_{12}^{2} ) q^{31} + \zeta_{12} q^{32} + ( \zeta_{12}^{2} + \zeta_{12}^{4} ) q^{34} - q^{36} + ( -\zeta_{12}^{2} - \zeta_{12}^{4} ) q^{41} + \zeta_{12}^{4} q^{46} + ( -\zeta_{12} - \zeta_{12}^{3} ) q^{47} -\zeta_{12}^{4} q^{49} -\zeta_{12}^{2} q^{56} + ( -\zeta_{12} + \zeta_{12}^{5} ) q^{62} + \zeta_{12} q^{63} - q^{64} + ( -\zeta_{12} - \zeta_{12}^{3} ) q^{68} + q^{71} -\zeta_{12}^{5} q^{72} + \zeta_{12}^{2} q^{79} + \zeta_{12}^{4} q^{81} + ( \zeta_{12} + \zeta_{12}^{3} ) q^{82} + ( -1 + \zeta_{12}^{4} ) q^{89} -\zeta_{12}^{3} q^{92} + ( 1 + \zeta_{12}^{2} ) q^{94} + ( \zeta_{12} - \zeta_{12}^{5} ) q^{97} + \zeta_{12}^{3} q^{98} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$4q + 2q^{4} - 2q^{9} + O(q^{10})$$ $$4q + 2q^{4} - 2q^{9} + 2q^{14} - 2q^{16} + 6q^{31} - 4q^{36} - 2q^{46} + 2q^{49} - 2q^{56} - 4q^{64} + 4q^{71} + 2q^{79} - 2q^{81} - 6q^{89} + 6q^{94} + O(q^{100})$$
## Character values
We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/1400\mathbb{Z}\right)^\times$$.
$$n$$: $$351$$, $$701$$, $$801$$, $$1177$$
$$\chi(n)$$: $$1$$, $$-1$$, $$\zeta_{12}^{2}$$, $$-1$$
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
For more information on an embedded modular form you can click on its label.
Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$
549.1
0.866025 + 0.500000i −0.866025 − 0.500000i 0.866025 − 0.500000i −0.866025 + 0.500000i
−0.866025 + 0.500000i 0 0.500000 0.866025i 0 0 −0.866025 + 0.500000i 1.00000i −0.500000 0.866025i 0
549.2 0.866025 0.500000i 0 0.500000 0.866025i 0 0 0.866025 0.500000i 1.00000i −0.500000 0.866025i 0
1349.1 −0.866025 0.500000i 0 0.500000 + 0.866025i 0 0 −0.866025 0.500000i 1.00000i −0.500000 + 0.866025i 0
1349.2 0.866025 + 0.500000i 0 0.500000 + 0.866025i 0 0 0.866025 + 0.500000i 1.00000i −0.500000 + 0.866025i 0
## Inner twists
Char Parity Ord Mult Type
1.a even 1 1 trivial
8.b even 2 1 RM by $$\Q(\sqrt{2})$$
5.b even 2 1 inner
7.d odd 6 1 inner
35.i odd 6 1 inner
40.f even 2 1 inner
56.j odd 6 1 inner
280.bk odd 6 1 inner
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 1400.1.bl.a 4
5.b even 2 1 inner 1400.1.bl.a 4
5.c odd 4 1 1400.1.bf.a 2
5.c odd 4 1 1400.1.bf.b yes 2
7.d odd 6 1 inner 1400.1.bl.a 4
8.b even 2 1 RM 1400.1.bl.a 4
35.i odd 6 1 inner 1400.1.bl.a 4
35.k even 12 1 1400.1.bf.a 2
35.k even 12 1 1400.1.bf.b yes 2
40.f even 2 1 inner 1400.1.bl.a 4
40.i odd 4 1 1400.1.bf.a 2
40.i odd 4 1 1400.1.bf.b yes 2
56.j odd 6 1 inner 1400.1.bl.a 4
280.bk odd 6 1 inner 1400.1.bl.a 4
280.bv even 12 1 1400.1.bf.a 2
280.bv even 12 1 1400.1.bf.b yes 2
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
1400.1.bf.a 2 5.c odd 4 1
1400.1.bf.a 2 35.k even 12 1
1400.1.bf.a 2 40.i odd 4 1
1400.1.bf.a 2 280.bv even 12 1
1400.1.bf.b yes 2 5.c odd 4 1
1400.1.bf.b yes 2 35.k even 12 1
1400.1.bf.b yes 2 40.i odd 4 1
1400.1.bf.b yes 2 280.bv even 12 1
1400.1.bl.a 4 1.a even 1 1 trivial
1400.1.bl.a 4 5.b even 2 1 inner
1400.1.bl.a 4 7.d odd 6 1 inner
1400.1.bl.a 4 8.b even 2 1 RM
1400.1.bl.a 4 35.i odd 6 1 inner
1400.1.bl.a 4 40.f even 2 1 inner
1400.1.bl.a 4 56.j odd 6 1 inner
1400.1.bl.a 4 280.bk odd 6 1 inner
## Hecke kernels
This newform subspace is the entire newspace $$S_{1}^{\mathrm{new}}(1400, [\chi])$$.
## Hecke characteristic polynomials
$p$ $F_p(T)$
$2$ $$1 - T^{2} + T^{4}$$
$3$ $$T^{4}$$
$5$ $$T^{4}$$
$7$ $$1 - T^{2} + T^{4}$$
$11$ $$T^{4}$$
$13$ $$T^{4}$$
$17$ $$9 + 3 T^{2} + T^{4}$$
$19$ $$T^{4}$$
$23$ $$1 - T^{2} + T^{4}$$
$29$ $$T^{4}$$
$31$ $$( 3 - 3 T + T^{2} )^{2}$$
$37$ $$T^{4}$$
$41$ $$( 3 + T^{2} )^{2}$$
$43$ $$T^{4}$$
$47$ $$9 + 3 T^{2} + T^{4}$$
$53$ $$T^{4}$$
$59$ $$T^{4}$$
$61$ $$T^{4}$$
$67$ $$T^{4}$$
$71$ $$( -1 + T )^{4}$$
$73$ $$T^{4}$$
$79$ $$( 1 - T + T^{2} )^{2}$$
$83$ $$T^{4}$$
$89$ $$( 3 + 3 T + T^{2} )^{2}$$
$97$ $$( -3 + T^{2} )^{2}$$ | 2021-03-08 10:49:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9964842200279236, "perplexity": 14117.428199135986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178383355.93/warc/CC-MAIN-20210308082315-20210308112315-00126.warc.gz"} |
https://indoxxiid.com/qa/question-what-is-not-a-real-number.html | # Question: What Is Not A Real Number?
## Is the number 9 real?
Yes. The number 9 is one of the counting numbers 1, 2, 3, 4, 5, 6, 7, 8, 9, …, and every counting number is a real number.
Real numbers are the numbers which include both rational and irrational numbers.
Rational numbers such as integers (-2, 0, 1) and fractions (1/2, 2.5), and irrational numbers such as √3 and π (approximately 22/7), are all real numbers.
## Is 64 a real number?
Yes, 64 is a real number. From the given numbers, −7 and 8 are integers. Also, notice that 64 is the square of 8, so −√64 = −8. So the integers are −7, 8, and −√64.
## Are numbers natural?
A natural number is an integer greater than 0. Natural numbers begin at 1 and increment to infinity: 1, 2, 3, 4, 5, etc. Natural numbers are also called “counting numbers” because they are used for counting. For example, if you are timing something in seconds, you would use natural numbers (usually starting with 1).
## Is square root of 7 a real number?
Not all square roots are whole numbers. Many square roots are irrational numbers, meaning there is no rational number equivalent. For example, 2 is the square root of 4 because 2 × 2 = 4. The number 7 is the square root of 49 because 7 × 7 = 49.
## What is the set of numbers?
The set of real numbers is made by combining the set of rational numbers and the set of irrational numbers. The set of real numbers is all the numbers that have a location on the number line. Integers: …, -3, -2, -1, 0, 1, 2, 3, … Real numbers: any number that is rational or irrational.
## What type of number is √ 64?
√64 equals 8, which can be written as the fraction 8/1, so it is a rational square root number. Some other rational square root numbers are: √9, √16, √36, √49, √64.
## Which of the numbers are real which are not real?
Irrational numbers: Real numbers that are not rational. Imaginary numbers: Numbers that equal the product of a real number and the square root of −1. The number 0 is both real and imaginary.
## What is not a real number square root?
Negative numbers don’t have real square roots since a square is either positive or 0. The square roots of numbers that are not a perfect square are members of the irrational numbers. This means that they can’t be written as the quotient of two integers.
## What does R mean in math?
R denotes the real numbers. List of Mathematical Symbols: R = real numbers, Z = integers, N = natural numbers, Q = rational numbers, P = irrational numbers.
## Is √ 64 an irrational number?
√64 is rational, since √64 = 8.
## Is 8 an irrational number?
The number 8 is a rational number because it can be written as the fraction 8/1. Likewise, 3/4 is a rational number because it can be written as a fraction.
## Is 0 a real number?
Real numbers consist of zero (0), the positive and negative integers (-3, -1, 2, 4), and all the fractional and decimal values in between (0.4, 3.1415927, 1/2). Real numbers are divided into rational and irrational numbers.
## What kind of number is zero?
0 is a rational, whole, integer and real number. Some definitions include it as a natural number and some don’t (starting at 1 instead).
## What are root numbers?
The root of a number x is another number, which when multiplied by itself a given number of times, equals x. For example, the third root (also called the cube root) of 64 is 4, because if you multiply three fours together you get 64: 4 × 4 × 4 = 64.
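The perfect-square test behind these answers — a square root of an integer is rational exactly when the integer is a perfect square — can be sketched in Python (my own illustration, not from the article):

```python
import math

# A square root of a non-negative integer is rational exactly when the
# integer is a perfect square; otherwise the root is irrational.
def has_rational_sqrt(n):
    r = math.isqrt(n)      # integer square root (floor of the true root)
    return r * r == n

print(has_rational_sqrt(64))  # True:  sqrt(64) = 8
print(has_rational_sqrt(7))   # False: sqrt(7) is irrational
print(4 * 4 * 4)              # 64, so 4 is the third (cube) root of 64
```

The last line mirrors the article's cube-root example: multiplying three fours together gives 64.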
## Is 13 a real number?
Is 13 real, natural, whole, rational, and prime? Yes. Since it is an integer, it is also rational.
## Is 17 a natural number?
Natural Numbers – the set of numbers 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, …, that we see and use every day. The natural numbers are often referred to as the counting numbers and the positive integers. Whole Numbers – the natural numbers plus zero.
## What’s considered a real number?
Real numbers are, in fact, pretty much any number that you can think of. This can include whole numbers or integers, fractions, rational numbers and irrational numbers. Real numbers can be positive or negative, and include the number zero.
## Is 2/3 an irrational number?
In mathematics, rational means “ratio-like.” So a rational number is one that can be written as the ratio of two integers. For example, 3 = 3/1, −17, and 2/3 are rational numbers — so no, 2/3 is not irrational. Most real numbers (points on the number line) are irrational (not rational).
## What does R * mean in math?
R* usually denotes the set of projectively extended real numbers. Unfortunately, the notation is not standardized, so the set of affinely extended real numbers is also denoted R* by some authors. | 2021-05-11 20:02:13 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8617833256721497, "perplexity": 478.8222103558585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989856.11/warc/CC-MAIN-20210511184216-20210511214216-00417.warc.gz"}
https://unapologetic.wordpress.com/category/geometry/analytic-geometry/ | # The Unapologetic Mathematician
## The Hodge Star
Sorry for the delay from last Friday to today, but I was chasing down a good lead.
Anyway, last week I said that I’d talk about a linear map that extends the notion of the correspondence between parallelograms in space and perpendicular vectors.
First of all, we should see why there may be such a correspondence. We’ve identified $k$-dimensional parallelepipeds in an $n$-dimensional vector space $V$ with antisymmetric tensors of degree $k$: $A^k(V)$. Of course, not every such tensor will correspond to a parallelepiped (some will be linear combinations that can’t be written as a single wedge of $k$ vectors), but we’ll just keep going and let our methods apply to such more general tensors. Anyhow, we also know how to count the dimension of the space of such tensors:
$\displaystyle\dim\left(A^k(V)\right)=\binom{n}{k}=\frac{n!}{k!(n-k)!}$
This formula tells us that $A^k(V)$ and $A^{n-k}(V)$ will have the exact same dimension, and so it makes sense that there might be an isomorphism between them. And we’re going to look for one which defines the “perpendicular” $n-k$-dimensional parallelepiped with the same size.
So what do we mean by “perpendicular”? It’s not just in terms of the “angle” defined by the inner product. Indeed, in that sense the parallelograms $e_1\wedge e_2$ and $e_1\wedge e_3$ are perpendicular. No, we want any vector in the subspace defined by our parallelepiped to be perpendicular to any vector in the subspace defined by the new one. That is, we want the new parallelepiped to span the orthogonal complement to the subspace we start with.
Our definition will also need to take into account the orientation on $V$. Indeed, considering the parallelogram $e_1\wedge e_2$ in three-dimensional space, the perpendicular must be $ce_3$ for some nonzero constant $c$, or otherwise it won’t be perpendicular to the whole $x$$y$ plane. And $\vert c\vert$ has to be ${1}$ in order to get the right size. But will it be $+e_3$ or $-e_3$? The difference is entirely in the orientation.
Okay, so let’s pick an orientation on $V$, which gives us a particular top-degree tensor $\omega$ so that $\mathrm{vol}(\omega)=1$. Now, given some $\eta\in A^k(V)$, we define the Hodge dual $*\eta\in A^{n-k}(V)$ to be the unique antisymmetric tensor of degree $n-k$ satisfying
$\displaystyle\zeta\wedge*\eta=\langle\zeta,\eta\rangle\omega$
for all $\zeta\in A^k(V)$. Notice here that if $\eta$ and $\zeta$ describe parallelepipeds, and any side of $\zeta$ is perpendicular to all the sides of $\eta$, then the projection of $\zeta$ onto the subspace spanned by $\eta$ will have zero volume, and thus $\langle\zeta,\eta\rangle=0$. This is what we expect, for then this side of $\zeta$ must lie within the perpendicular subspace spanned by $*\eta$, and so the wedge $\zeta\wedge*\eta$ should also be zero.
As a particular example, say we have an orthonormal basis $\{e_i\}_{i=1}^n$ of $V$ so that $\omega=e_1\wedge\dots\wedge e_n$. Then given a multi-index $I=(i_1,\dots,i_k)$ the basic wedge $e_I$ gives us the subspace spanned by the vectors $\{e_{i_1},\dots,e_{i_k}\}$. The orthogonal complement is clearly spanned by the remaining basis vectors $\{e_{j_1},\dots,e_{j_{n-k}}\}$, and so $*e_I=\pm e_J$, with the sign depending on whether the list $(i_1,\dots,i_k,j_1,\dots,j_{n-k})$ is an even or an odd permutation of $(1,\dots,n)$.
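That sign rule — $*e_I=\pm e_J$ with the sign given by the parity of the permutation $(i_1,\dots,i_k,j_1,\dots,j_{n-k})$ — is easy to sketch in code. This is a Python illustration of my own, not part of the original post:

```python
def perm_sign(p):
    """Sign of a permutation given as a sequence of distinct integers,
    computed by counting inversions."""
    sign = 1
    p = list(p)
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                sign = -sign
    return sign

def hodge_dual_basic(I, n):
    """Return (sign, J) with *e_I = sign * e_J for an oriented orthonormal
    basis of an n-dimensional space; J is the complementary multi-index
    in increasing order."""
    J = tuple(j for j in range(1, n + 1) if j not in I)
    return perm_sign(tuple(I) + J), J

print(hodge_dual_basic((2,), 3))    # (-1, (1, 3)): *e_2 = -(e_1 wedge e_3)
print(hodge_dual_basic((1, 3), 4))  # (-1, (2, 4)): *(e_1 wedge e_3) = -(e_2 wedge e_4)
```

Running it over all basic wedges reproduces the tables of duals worked out next.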
To be even more explicit, let’s work these out for the cases of dimensions three and four. First off, we have a basis $\{e_1,e_2,e_3\}$. We work out all the duals of basic wedges as follows:
\displaystyle\begin{aligned}*1&=e_1\wedge e_2\wedge e_3\\ *e_1&=e_2\wedge e_3\\ *e_2&=-e_1\wedge e_3=e_3\wedge e_1\\ *e_3&=e_1\wedge e_2\\ *(e_1\wedge e_2)&=e_3\\ *(e_1\wedge e_3)&=-e_2\\ *(e_2\wedge e_3)&=e_1\\ *(e_1\wedge e_2\wedge e_3)&=1\end{aligned}
This reconstructs the correspondence we had last week between basic parallelograms and perpendicular basis vectors. In the four-dimensional case, the basis $\{e_1,e_2,e_3,e_4\}$ leads to the duals
\displaystyle\begin{aligned}*1&=e_1\wedge e_2\wedge e_3\wedge e_4\\ *e_1&=e_2\wedge e_3\wedge e_4\\ *e_2&=-e_1\wedge e_3\wedge e_4\\ *e_3&=e_1\wedge e_2\wedge e_4\\ *e_4&=-e_1\wedge e_2\wedge e_3\\ *(e_1\wedge e_2)&=e_3\wedge e_4\\ *(e_1\wedge e_3)&=-e_2\wedge e_4\\ *(e_1\wedge e_4)&=e_2\wedge e_3\\ *(e_2\wedge e_3)&=e_1\wedge e_4\\ *(e_2\wedge e_4)&=-e_1\wedge e_3\\ *(e_3\wedge e_4)&=e_1\wedge e_2\\ *(e_1\wedge e_2\wedge e_3)&=e_4\\ *(e_1\wedge e_2\wedge e_4)&=-e_3\\ *(e_1\wedge e_3\wedge e_4)&=e_2\\ *(e_2\wedge e_3\wedge e_4)&=-e_1\\ *(e_1\wedge e_2\wedge e_3\wedge e_4)&=1\end{aligned}
It’s not a difficult exercise to work out the relation $**\eta=(-1)^{k(n-k)}\eta$ for a degree $k$ tensor in an $n$-dimensional space.
November 9, 2009
## An Example of a Parallelogram
Today I want to run through an example of how we use our new tools to read geometric information out of a parallelogram.
I’ll work within $\mathbb{R}^3$ with an orthonormal basis $\{e_1, e_2, e_3\}$ and an identified origin $O$ to give us a system of coordinates. That is, given the point $P$, we set up a vector $\overrightarrow{OP}$ pointing from $O$ to $P$ (which we can do in a Euclidean space). Then this vector has components in terms of the basis:
$\displaystyle\overrightarrow{OP}=xe_1+ye_2+ze_3$
and we’ll write the point $P$ as $(x,y,z)$.
So let’s pick four points: $(0,0,0)$, $(1,1,0)$, $(2,1,1)$, and $(1,0,1)$. These four points do, indeed, give the vertices of a parallelogram, since both displacements from $(0,0,0)$ to $(1,1,0)$ and from $(1,0,1)$ to $(2,1,1)$ are $e_1+e_2$, and similarly the displacements from $(0,0,0)$ to $(1,0,1)$ and from $(1,1,0)$ to $(2,1,1)$ are both $e_1+e_3$. Alternatively, all four points lie within the plane described by $x=y+z$, and the region in this plane contained between the vertices consists of points $P$ so that
$\displaystyle\overrightarrow{OP}=u(e_1+e_2)+v(e_1+e_3)$
for some $u$ and $v$ both in the interval $[0,1]$. So this is a parallelogram contained between $e_1+e_2$ and $e_1+e_3$. Incidentally, note that the fact that all these points lie within a plane means that any displacement vector between two of them is in the kernel of some linear transformation. In this case, it’s the linear functional $\langle e_1-e_2-e_3,\underline{\hphantom{X}}\rangle$, and the vector $e_1-e_2-e_3$ is perpendicular to any displacement in this plane, which will come in handy later.
Now in a more familiar approach, we might say that the area of this parallelogram is its base times its height. Let’s work that out to check our answer against later. For the base, we take the length of one vector, say $e_1+e_2$. We use the inner product to calculate its length as $\sqrt{2}$. For the height we can’t just take the length of the other vector. Some basic trigonometry shows that we need the length of the other vector (which is again $\sqrt{2}$) times the sine of the angle between the two vectors. To calculate this angle we again use the inner product to find that its cosine is $\frac{1}{2}$, and so its sine is $\frac{\sqrt{3}}{2}$. Multiplying these all together we find a height of $\sqrt{\frac{3}{2}}$, and thus an area of $\sqrt{3}$.
On the other hand, let’s use our new tools. We represent the parallelogram as the wedge $(e_1+e_2)\wedge(e_1+e_3)$ — incidentally choosing an orientation of the parallelogram and the entire plane containing it — and calculate its length using the inner product on the exterior algebra:
\displaystyle\begin{aligned}\mathrm{vol}\left((e_1+e_2)\wedge(e_1+e_3)\right)^2&=2!\langle(e_1+e_2)\wedge(e_1+e_3),(e_1+e_2)\wedge(e_1+e_3)\rangle\\&=2!\frac{1}{2!}\det\begin{pmatrix}\langle e_1+e_2,e_1+e_2\rangle&\langle e_1+e_2,e_1+e_3\rangle\\\langle e_1+e_3,e_1+e_2\rangle&\langle e_1+e_3,e_1+e_3\rangle\end{pmatrix}\\&=\det\begin{pmatrix}2&1\\1&2\end{pmatrix}\\&=\left(2\cdot2-1\cdot1\right)=3\end{aligned}
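Both computations are easy to verify numerically. Here is a small sketch in plain Python (the coordinate vectors are those of this example) checking that the base-times-height calculation and the Gram-determinant calculation agree:

```python
from math import sqrt

def dot(u, v):
    """Standard inner product on R^3."""
    return sum(a * b for a, b in zip(u, v))

# The two sides of the parallelogram, in coordinates.
a = (1, 1, 0)  # e1 + e2
b = (1, 0, 1)  # e1 + e3

# Base times height: |a| * |b| * sin(theta), with cos(theta) from the inner product.
base = sqrt(dot(a, a))
cos_theta = dot(a, b) / (sqrt(dot(a, a)) * sqrt(dot(b, b)))
height = sqrt(dot(b, b)) * sqrt(1 - cos_theta**2)
area_trig = base * height

# Gram-determinant form: vol^2 = det [[<a,a>, <a,b>], [<b,a>, <b,b>]].
gram_det = dot(a, a) * dot(b, b) - dot(a, b) * dot(b, a)
area_gram = sqrt(gram_det)

print(area_trig, area_gram)  # both sqrt(3) ≈ 1.732
```

The Gram-determinant route never needs an angle at all, which is exactly why it generalizes so cleanly to higher-dimensional parallelepipeds.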
Alternately, we could calculate it by expanding in terms of basic wedges. That is, we can write
\displaystyle\begin{aligned}(e_1+e_2)\wedge(e_1+e_3)&=e_1\wedge e_1+e_1\wedge e_3+e_2\wedge e_1+e_2\wedge e_3\\&=e_2\wedge e_3-e_3\wedge e_1-e_1\wedge e_2\end{aligned}
This tells us that if we take our parallelogram and project it onto the $y$-$z$ plane (which has an orthonormal basis $\{e_2,e_3\}$) we get an area of ${1}$. Similarly, projecting our parallelogram onto the $x$-$y$ plane (with orthonormal basis $\{e_1,e_2\}$) we get an area of $-1$. That is, the area is ${1}$ and the orientation of the projected parallelogram disagrees with that of the plane. Projecting onto the $z$-$x$ plane likewise gives an area of $-1$. Anyhow, now the squared area of the parallelogram is the sum of the squares of these projected areas: $1^2+(-1)^2+(-1)^2=3$.
Notice, now, the similarity between this expression $e_2\wedge e_3-e_3\wedge e_1-e_1\wedge e_2$ and the perpendicular vector we found before: $e_1-e_2-e_3$. Each one is the sum of three terms with the same choices of signs. The terms themselves seem to have something to do with each other as well; the wedge $e_2\wedge e_3$ describes an area in the $y$-$z$ plane, while $e_1$ describes a length in the perpendicular $x$-axis. Similarly, $e_1\wedge e_2$ describes an area in the $x$-$y$ plane, while $e_3$ describes a length in the perpendicular $z$-axis. And, magically, the sum of these three perpendicular vectors to these three parallelograms gives the perpendicular vector to their sum!
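In $\mathbb{R}^3$ this correspondence is realized by the familiar cross product, and we can check it numerically for this example (a sketch anticipating that correspondence, not a new definition):

```python
def cross(u, v):
    """Cross product in R^3: components are the wedge coefficients
    of e2^e3, e3^e1, e1^e2 respectively."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

a = (1, 1, 0)  # e1 + e2
b = (1, 0, 1)  # e1 + e3

n = cross(a, b)
print(n)  # (1, -1, -1): the perpendicular vector e1 - e2 - e3

# The squared area of the parallelogram is the sum of the squared components,
# matching the projected-area computation above.
area_sq = sum(c * c for c in n)
print(area_sq)  # 3
```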
There is, indeed, a linear correspondence between parallelograms and vectors that extends this idea, which we will explore tomorrow. The seemingly-odd choice of $e_3\wedge e_1$ to correspond to $e_2$, though, should be a tip-off that this correspondence is closely bound up with the notion of orientation.
November 5, 2009
## Parallelepipeds and Volumes III
So, why bother with this orientation stuff, anyway? We’ve got an inner product on spaces of antisymmetric tensors, and that should give us a concept of length. Why can’t we just calculate the size of a parallelepiped by sticking it into this bilinear form twice?
Well, let’s see what happens. Given a $k$-dimensional parallelepiped with sides $v_1$ through $v_k$, we represent the parallelepiped by the wedge $\omega=v_1\wedge\dots\wedge v_k$. Then we might try defining the volume by using the renormalized inner product
$\displaystyle\mathrm{vol}(\omega)^2=k!\langle\omega,\omega\rangle$
Let’s expand one copy of the wedge $\omega$ out in terms of our basis of wedges of basis vectors
$\displaystyle k!\langle\omega,\omega\rangle=k!\langle\omega,\omega^Ie_I\rangle=k!\langle\omega,e_I\rangle\omega^I$
where the multi-index $I$ runs over all increasing $k$-tuples of indices $1\leq i_1<\dots<i_k\leq n$. But we already know that $\omega^I=k!\langle\omega,e_I\rangle$, and so this squared volume is the sum of the squares of these components, just like we’re familiar with. Then we can define the $k$-volume of the parallelepiped as the square root of this sum.
Let’s look specifically at what happens for top-dimensional parallelepipeds, where $k=n$. Then we only have one possible multi-index $I=(1,\dots,n)$, with coefficient
$\displaystyle\omega^{1\dots n}=n!\langle e_1\wedge\dots\wedge e_n,v_1\wedge\dots\wedge v_n\rangle=\det\left(v_j^i\right)$
$\displaystyle\mathrm{vol}(\omega)=\sqrt{\left(\det\left(v_j^i\right)\right)^2}=\left\lvert\det\left(v_j^i\right)\right\rvert$
So we get the magnitude of the volume without having to worry about choosing an orientation. Why even bother?
Because we already do care about orientation. Let’s go all the way back to one-dimensional parallelepipeds, which are just described by vectors. A vector doesn’t just describe a certain length, it describes a length along a certain line in space. And it doesn’t just describe a length along that line, it describes a length in a certain direction along that line. A vector picks out three things:
• A one-dimensional subspace $L$ of the ambient space $V$.
• An orientation of the subspace $L$.
• A volume (length) of this oriented subspace.
And just like vectors, nondegenerate $k$-dimensional parallelepipeds pick out three things
• A $k$-dimensional subspace $L$ of the ambient space $V$.
• An orientation of the subspace $L$.
• A $k$-dimensional volume of this oriented subspace.
The difference is that when we get up to the top dimension the space itself can have its own orientation, which may or may not agree with the orientation induced by the parallelepiped. We don’t always care about this disagreement, and we can just take the absolute value to get rid of a sign if we don’t care, but it might come in handy.
November 4, 2009
## Parallelepipeds and Volumes II
Yesterday we established that the $k$-dimensional volume of a parallelepiped with $k$ sides should be an alternating multilinear functional of those $k$ sides. But now we want to investigate which one.
The universal property of spaces of antisymmetric tensors says that any such functional corresponds to a unique linear functional $V_k:A^k\left(\mathbb{R}^n\right)\rightarrow\mathbb{R}$. That is, we take the parallelepiped with sides $v_1$ through $v_k$ and represent it by the antisymmetric tensor $v_1\wedge\dots\wedge v_k$. Notice, in particular, that if the parallelepiped is degenerate then this tensor is ${0}$, as we hoped. Then volume is some linear functional that takes in such an antisymmetric tensor and spits out a real number. But which linear functional?
I’ll start by answering this question for $n$-dimensional parallelepipeds in $n$-dimensional space. Such a parallelepiped is represented by an antisymmetric tensor with the $n$ sides as its tensorands. But we’ve calculated the dimension of the space of such tensors: $\dim\left(A^n\left(\mathbb{R}^n\right)\right)=1$. That is, once we represent these parallelepipeds by antisymmetric tensors there’s only one parameter left to distinguish them: their volume. So if we specify the volume of one parallelepiped linearity will take care of all the others.
There’s one parallelepiped whose volume we know already. The unit $n$-cube must have unit volume. So, to this end, pick an orthonormal basis $\left\{e_i\right\}_{i=1}^n$. A parallelepiped with these sides corresponds to the antisymmetric tensor $e_1\wedge\dots\wedge e_n$, and the volume functional must send this to ${1}$. But be careful! The volume doesn’t depend just on the choice of basis, but on the order of the basis elements. Swap two of the basis elements and we should swap the sign of the volume. So we’ve got two different choices of volume functional here, which differ exactly by a sign. We call these two choices “orientations” on our vector space.
This is actually not as esoteric as it may seem. Almost all introductions to vectors — from multivariable calculus to vector-based physics — talk about “left-handed” and “right-handed” coordinate systems. These differ by a reflection, which would change the signs of all parallelepipeds. So we must choose one or the other, and choose which unit cube will have volume ${1}$ and which will have volume $-1$. The isomorphism from $\Lambda(V)$ to $\Lambda(V)^*$ then gives us a “volume form” $\mathrm{vol}\left(\underline{\hphantom{X}}\right)=n!\langle e_1\wedge\dots\wedge e_n,\underline{\hphantom{X}}\rangle$, which will give us the volume of a parallelepiped represented by a given top-degree wedge.
Once we’ve made that choice, what about general parallelepipeds? If we have sides $\left\{v_i\right\}_{i=1}^n$ — written in components as $v_i^je_j$ — we represent the parallelepiped by the wedge $v_1\wedge\dots\wedge v_n$. This is the image of our unit cube under the transformation sending $e_i$ to $v_i$, and so we find
\displaystyle\begin{aligned}\mathrm{vol}\left(v_1\wedge\dots\wedge v_n\right)&=n!\langle e_1\wedge\dots\wedge e_n,v_1\wedge\dots\wedge v_n\rangle\\&=\det\left(\langle e_i,v_j\rangle\right)\\&=\det\left(v_j^i\right)\end{aligned}
The volume of the parallelepiped is the determinant of this transformation.
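A quick numeric sketch of determinant-as-signed-volume (plain Python; the example sides are arbitrary illustrations). Note how swapping two sides reflects the parallelepiped and flips the sign:

```python
def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix.
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def signed_volume(v1, v2, v3):
    # Columns of the matrix are the sides of the parallelepiped.
    return det3([[v1[i], v2[i], v3[i]] for i in range(3)])

# The unit cube has signed volume +1; swapping two sides gives -1.
e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(signed_volume(e1, e2, e3))   # 1
print(signed_volume(e2, e1, e3))   # -1

# A general parallelepiped with sides (1,1,0), (1,0,1), (0,1,1):
print(signed_volume((1, 1, 0), (1, 0, 1), (0, 1, 1)))  # -2: volume 2, orientation reversed
```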
Incidentally, this gives a geometric meaning to the special orthogonal group $\mathrm{SO}(n,\mathbb{R})$. Orthogonal transformations send orthonormal bases to other orthonormal bases, which will send unit cubes to other unit cubes. But the determinant of an orthogonal transformation may be either $+1$ or $-1$. Transformations of the first kind make up the special orthogonal group, while transformations of the second kind send “positive” unit cubes to “negative” ones, and vice-versa. That is, they involve some sort of reflection, swapping the choice of orientation we made above. Special orthogonal transformations are those which preserve not only lengths and angles, but the orientation of the space. More generally, there is a homomorphism $\mathrm{GL}(n,\mathbb{R})\rightarrow\mathbb{Z}_2$ sending a transformation to the sign of its determinant. Transformations with positive determinant are said to be “orientation-preserving”, while those with negative determinant are said to be “orientation-reversing”.
November 3, 2009
## Parallelepipeds and Volumes I
And we’re back with more of what Mr. Martinez of Harvard’s Medical School assures me is onanism of the highest caliber. I’m sure he, too, blames me for not curing cancer.
Coming up in our study of calculus in higher dimensions we’ll need to understand parallelepipeds, and in particular their volumes. First of all, what is a parallelepiped? Or, more specifically, what is a $k$-dimensional parallelepiped in $n$-dimensional space? It’s a collection of points in space that we can describe as follows. Take a point $p$ and $k$ vectors $\left\{v_i\right\}_{i=1}^k$ in $\mathbb{R}^n$. The parallelepiped is the collection of points reachable by moving from $p$ by some fraction of each of the vectors $v_i$. That is, we pick $k$ values $t^i$, each in the interval $\left[0,1\right]$, and use them to specify the point $p+t^iv_i$. The collection of all such points is the parallelepiped with corner $p$ and sides $v_i$.
One possible objection is that these sides may not be linearly independent. If the sides are linearly independent, then they span a $k$-dimensional subspace of the ambient space, justifying our calling it $k$-dimensional. But if they’re not, then the subspace they span has a lower dimension. We’ll deal with this by calling such a parallelepiped “degenerate”, and the nice ones with linearly independent sides “nondegenerate”. Trust me, things will be more elegant in the long run if we just deal with them both on the same footing.
Now we want to consider the volume of a parallelepiped. The first observation is that the volume doesn’t depend on the corner point $p$. Indeed, we should be able to slide the corner around to any point in space as long as we bring the same displacement vectors along with us. So the volume should be a function only of the sides.
The second observation is that as a function of the sides, the volume function should commute with scalar multiplication in each variable separately. That is, if we multiply $v_i$ by a non-negative factor of $\lambda$, then we multiply the whole volume of the parallelepiped by $\lambda$ as well. But what about negative scaling factors? What if we reflect the side (and thus the whole parallelepiped) to point the other way? One answer might be that we get the same volume, but it’s going to be easier (and again more elegant) if we say that the new parallelepiped has the negative of the original one’s volume.
Negative volume? What could that mean? Well, we’re going to move away from the usual notion of volume just a little. Instead, we’re going to think of “signed” volume, which includes the possibility of being positive or negative. By itself, this sign will be less than clear at first, but we’ll get a better understanding as we go. As a first step we’ll say that two parallelepipeds related by a reflection have opposite signs. This won’t only cover the above behavior under scaling sides, but also what happens when we exchange the order of two sides. For example, the parallelogram with sides $v_1=a$ and $v_2=b$ and the parallelogram with sides $v_1=b$ and $v_2=a$ have the same areas with opposite signs. Similarly, swapping the order of two sides in a given parallelepiped will flip its sign.
The third observation is that the volume function should be additive in each variable. One way to see this is that the $k$-dimensional volume of the parallelepiped with sides $v_1$ through $v_k$ should be the product of the $(k-1)$-dimensional volume of the parallelepiped with sides $v_1$ through $v_{k-1}$ and the length of the component of $v_k$ perpendicular to all the other sides, and this length is a linear function of $v_k$. Since there’s nothing special here about the last side, we could repeat the argument with the other sides.
The other way to see this fact is to consider the following diagram, helpfully supplied by Kate from over at f(t):
The side of one parallelogram is the (vector) sum of the sides of the other two, and we can see that the area of the one parallelogram is the sum of the areas of the other two. This justifies the assertion that for parallelograms in the plane, the area is additive as a function of one side (and, similarly, of the other). Similar diagrams should be apparent to justify the assertion for higher-dimensional parallelepipeds in higher-dimensional spaces.
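For parallelograms in the plane this additivity is easy to check against the signed-area formula; a small numeric sketch (the side vectors are arbitrary illustrations):

```python
def det2(a, b):
    # Signed area of the parallelogram with sides a and b in the plane.
    return a[0]*b[1] - a[1]*b[0]

a = (3, 1)
b = (1, 2)
c = (-1, 1)
bc = (b[0] + c[0], b[1] + c[1])

# Signed area is additive in the second side: det(a, b + c) = det(a, b) + det(a, c).
print(det2(a, bc), det2(a, b) + det2(a, c))  # both 9
```

Note that the additivity only works out this cleanly because the areas are *signed*; with absolute areas a cancellation between oppositely-oriented pieces would break the formula.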
Putting all these together, we find that the $k$-dimensional volume of a parallelepiped with $k$ sides is an alternating multilinear functional, with the $k$ sides as variables, and so it lives somewhere in the exterior algebra $\Lambda(V^*)$. We’ll have to work out which particular functional gives us a good notion of volume as we continue.
November 2, 2009
# Mathematics | Probability Distributions Set 4 (Binomial Distribution)
• Last Updated : 17 Jun, 2021
The previous articles talked about some of the Continuous Probability Distributions. This article covers one of the distributions which are not continuous but discrete, namely the Binomial Distribution.
Introduction –
To understand the Binomial distribution, we must first understand what a Bernoulli Trial is. A Bernoulli trial is a random experiment with only two possible outcomes. These two outcomes are usually referred to as Success and Failure, but they may be given any label necessary. Each Bernoulli trial is independent of the others.
For example, consider the scenario where we need to find the probability of the event of an even number showing up on a die roll.

If E = Even number shows up, then

P(E) = 3/6 = 1/2

Here P(E) (or simply ‘p’) may be referred to as the probability of Success and P(E') = 1 - P(E) (or simply ‘q’) may be referred to as the probability of Failure. Notice that

p + q = 1, since there are only two possible outcomes.
Now consider that the experiment is repeated and we again try to find the probability of an even number showing up. We get

P(E) = 3/6 = 1/2
This is the same probability as the first experiment. This is because the two experiments are independent i.e. the outcome of one experiment does not affect the other.
Now that we know what a Bernoulli trial is, we can move on to understand the Binomial Distribution.
A random experiment consists of n Bernoulli trials such that
1. The trials are independent.
2. Each trial results in only two possible outcomes, labeled as “success” and “failure.”
3. The probability of a success in each trial, denoted as p, remains constant.
The random variable X that equals the number of trials that result in a success is a binomial random variable with parameters 0 < p < 1 and n = 1, 2, ….

Probability Mass Function –

The probability mass function is given by

f(x) = C(n, x) p^x (1 - p)^(n - x),  for x = 0, 1, …, n

The above stated probability mass function is a legitimate probability function, since summing it over x = 0, 1, …, n gives the binomial expansion of (p + (1 - p))^n = 1.
Notice that in the above formula, if we put n = 1, we get the same result as a Bernoulli trial. Here x can take the value 0 or 1 (since the number of successes can be 0 or 1 in one experiment).
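A minimal sketch of this PMF in Python (using the standard library’s math.comb for the binomial coefficient; the parameters n = 10, p = 0.3 are arbitrary illustrations):

```python
from math import comb

def binomial_pmf(x, n, p):
    # f(x) = C(n, x) * p^x * (1 - p)^(n - x)
    return comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 10, 0.3

# The PMF is legitimate: probabilities over x = 0..n sum to 1.
total = sum(binomial_pmf(x, n, p) for x in range(n + 1))
print(total)  # 1.0 (up to float rounding)

# With n = 1 the PMF reduces to a single Bernoulli trial: q and p.
print(binomial_pmf(0, 1, p), binomial_pmf(1, 1, p))  # 0.7 0.3
```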
Expected Value –
To find the Expected Value of the Binomial Distribution, let’s first find out the Expected Value of a Bernoulli trial. Let p and q be the probabilities of Success (1) and Failure (0). Then

E[BT] = 1 · p + 0 · q = p
Since the Binomial Distribution has n Bernoulli trials, the Expected Value is multiplied by n:

E[X] = n · E[BT] = np

This is due to the fact that each experiment is independent and the Expected Value of the sum of random variables is equal to the sum of their individual Expected Values. This property is also called the Linearity of Expectation.
Variance and Standard deviation –
The variance of the Binomial distribution can be found in a similar way. For n independent random variables,

Var[X] = Var[BT_1] + Var[BT_2] + … + Var[BT_n] = n · Var[BT]

Here, Var[BT] is the Variance of 1 Bernoulli trial:

Var[BT] = E[BT^2] - E[BT]^2 = p - p^2 = p(1 - p) = pq

Using this result to find out the variance of the Binomial Distribution,

Var[X] = npq

The Standard Deviation of the distribution –

σ = √(npq)
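These closed forms can be verified by brute-force summation over the PMF; a small sketch (the parameters n = 20, p = 0.25 are just an arbitrary illustration):

```python
from math import comb, sqrt

def pmf(x, n, p):
    return comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 20, 0.25
q = 1 - p

# Compute E[X] and Var[X] directly from the definition of expectation.
mean = sum(x * pmf(x, n, p) for x in range(n + 1))
var = sum((x - mean)**2 * pmf(x, n, p) for x in range(n + 1))

print(mean, n * p)      # both equal np = 5.0 (up to float rounding)
print(var, n * p * q)   # both equal npq = 3.75 (up to float rounding)
print(sqrt(var))        # the standard deviation
```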
• Example – An airline sells 65 tickets for a plane with capacity of 60 passengers. This is done because it is possible for some people to not show up. The probability of a person not showing up for the flight is 0.1. All passengers behave independently. Find the probability of the event that the airline does not have to arrange separate tickets for excess people.
• Solution – If more than 60 people show up, then the airline has to reschedule tickets for the excess number of people. Let X be the random variable denoting the number of passengers that show up. We have to find the probability of the event where X <= 60.
Let p be the probability that a passenger shows up. p = 1 - 0.1 = 0.9, and q = 0.1. Then X is a binomial random variable with n = 65 and p = 0.9, so

P(X <= 60) = Σ_{x=0}^{60} C(65, x) (0.9)^x (0.1)^{65-x} ≈ 0.79
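This sum is tedious by hand but easy to evaluate numerically; a quick sketch in plain Python:

```python
from math import comb

n, p = 65, 0.9   # tickets sold; probability a passenger shows up

def pmf(x):
    # Binomial PMF: probability that exactly x of the n ticket-holders show up.
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Probability that at most 60 passengers show up (the flight is not overbooked).
prob = sum(pmf(x) for x in range(61))
print(prob)  # roughly 0.79
```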