http://fjellrypa.east.no/6gjaud/sorting-algorithms-time-complexity-908b12
### Sorting Algorithms: Time Complexity

Imagine a telephone book application that took a whole day to re-sort all of its numbers after a single new number was added. Analyzing the time it takes for an algorithm to give output is of crucial importance, and sorting algorithms are the classic place to study it.

Time complexity is a function describing the amount of time an algorithm takes in terms of the amount of input to the algorithm; space complexity is the total amount of the computer's memory an algorithm uses when it is executed, expressed the same way. Together they act as a measurement scale for algorithms: they estimate how an algorithm performs regardless of the kind of machine it runs on. The observed running time of a program also depends on the hardware, operating system, processor, and so on, but those factors are deliberately excluded from the complexity itself.

A sorting algorithm is used to rearrange a given array or list of elements according to a comparison operator on the elements; the comparison operator decides the new order of the elements in the data structure. There are many sorting algorithms in computer science, and most of the good ones share the same time complexity, O(n log n), where n is the total number of elements: merge sort, heap sort, and quicksort among them.

A quick self-check: if an algorithm's time complexity is O(n²), its growth is quadratic; if it is O((3/2)^n), its growth is exponential.

Some key facts to collect before going further:

- Merge sort follows the divide-and-conquer methodology: it recursively breaks the problem into two or more sub-problems until each can be solved easily, then combines the results (a sketch follows this list).
- Java's Arrays.sort(Object[]) is based on TimSort, which combines insertion sort and merge sort; it runs in O(n log n) in the worst case and O(n) on already-sorted input.
- Selection sort is the easiest approach to sorting; its time complexity is O(n²) and its space complexity is O(1).
- Quicksort is one of the most efficient sorting algorithms in practice, which is why it is so widely used.
- Any comparison-based sorting algorithm can be made stable by using position as a tie-breaking criterion when two elements compare equal.
- An algorithm in a higher complexity class may still be faster in practice if your inputs are always small.
- All the basic arithmetic operations (addition, subtraction, multiplication, division, and comparison) can be done in polynomial time.
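To make the divide-and-conquer idea concrete, here is a minimal merge sort sketch in Python (an illustrative implementation, not taken from any particular library):

```python
def merge_sort(items):
    """Recursively split, sort the halves, then merge: O(n log n) time, O(n) space."""
    if len(items) <= 1:              # base case: 0 or 1 elements are already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:      # <= keeps equal elements in order: stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])          # one side is exhausted; append the rest
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```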
Here are some highlights about Big-O notation. Big O ("big order") notation is a framework to analyze and compare algorithms: it describes the amount of work the CPU has to do (the time complexity) as the input size grows towards infinity. Here n indicates the input size, while O(...) gives the worst-case growth rate as a function of n. Because machine-dependent factors really do affect wall-clock time, it is crucial to select a suitable sorting algorithm for the data at hand.

There are two complementary ways to measure the runtime of sorting algorithms. From a practical point of view, you can measure the runtime of an implementation, for example with Python's timeit module. For a theoretical perspective, you can measure the runtime complexity of the algorithm using Big-O notation, defined as a function of the input size n.

Merge sort has a guaranteed time complexity of O(n log n), which is significantly faster than the average and worst-case running times of several other sorting algorithms. It is a stable sort with a space complexity of O(n), although more complex variants take up only O(1) extra space. Unlike quicksort, merge sort is not an adaptive sorting algorithm: its running time does not depend on the initial ordering of the input. Counting sort is another alternative, with a time complexity that is not bound by the comparison model.

Quicksort uses divide and conquer with recursion. Its time complexity is O(n log n) in the best case, O(n log n) in the average case, and O(n²) in the worst case. In one comparison of bubble, selection, insertion, quick, and merge sort implemented in C, with system-generated input values varying from 100 to 1000 elements, quicksort was found very efficient, taking 168 ms for 1000 data inputs.

Shell sort is an insertion sort variant that first partially sorts its data and then finishes the sort by running an insertion sort over the entire array: it generally starts by sorting small subsets of the array, then repeats the process with larger and larger subsets until the subset is the whole array. Insertion sort itself has running time Θ(n²) but is generally faster than Θ(n log n) algorithms for lists of around 10 or fewer elements.

A sorting technique is in-place if it does not use any extra memory to sort the array, and out-of-place otherwise. It is important to understand these basic algorithms even if you rarely implement them, because their space and time complexity will affect that of your own programs that use them.

Bubble sort, also known as sinking sort, is a very simple algorithm: it works by continuously swapping adjacent elements that appear in the wrong order in the input list, and this swapping process continues until the list is sorted. Bubble sort is beneficial when there are few elements and the array is nearly sorted. Its worst-case time complexity is O(n²), its space complexity is O(1), and the number of swaps it performs equals the number of inversion pairs present in the given array. Note that a naive implementation takes the same time even if the array is already sorted, whereas an adaptive implementation detects ordered input and finishes almost immediately; a runnable version, together with a timeit measurement, appears below.
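Below is a minimal sketch of the adaptive bubble sort just described, plus a quick timeit comparison against Python's built-in sorted (which uses TimSort). It is illustrative only, and the timings will vary by machine:

```python
import random
import timeit

def bubble_sort(a):
    """Repeatedly swap adjacent out-of-order pairs: O(n^2) worst case, O(1) extra space."""
    a = list(a)                          # work on a copy
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):       # the tail is already in place after pass i
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:                  # adaptive: early exit on sorted input
            break
    return a

data = [random.random() for _ in range(1000)]
print(timeit.timeit(lambda: bubble_sort(data), number=3))  # quadratic: slow
print(timeit.timeit(lambda: sorted(data), number=3))       # TimSort: much faster
```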
You can work out a time complexity by counting the number of operations performed by your code, then dropping constants and lower-order terms. For example, selection sort on n integers performs (n−1) + (n−2) + ⋯ + 1 = n(n−1)/2 comparisons, which is why its exact operation count collapses to O(n²); working through sums like this gives a better understanding of the time complexity of several sorting algorithms. Practical sorting algorithms are usually judged by their average-case time complexity.

Stability matters too: quick sort is not a stable sorting algorithm, while merge sort is stable. A stable sort preserves the relative order of equal elements.

Counting sort is not a comparison-based sorting algorithm, so the comparison lower bound does not apply to it; its time complexity is O(n+k), where k is the range of the input values. Likewise, the time complexity of radix sort is given by the formula T(n) = O(d·(n+b)), where d is the number of digits in the given list, n is the number of elements in the list, and b is the base or bucket size used, which is normally base 10 for decimal representation. A related technique, bucket sort, has best and average time complexity O(n+k), where k is the number of buckets, and a worst case of O(n²) when all elements fall into the same bucket. A radix sort sketch follows.
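As an illustration of the T(n) = O(d·(n+b)) formula, here is a least-significant-digit radix sort sketch in Python. It is illustrative only and assumes non-negative integers:

```python
def radix_sort(nums, base=10):
    """LSD radix sort: d passes of a stable scatter over b buckets,
    giving T(n) = O(d * (n + b))."""
    if not nums:
        return nums
    place, largest = 1, max(nums)
    while largest // place > 0:               # one pass per digit position (d passes)
        buckets = [[] for _ in range(base)]   # b buckets
        for x in nums:
            buckets[(x // place) % base].append(x)  # stable per-digit scatter
        nums = [x for bucket in buckets for x in bucket]
        place *= base
    return nums

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```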
The idea behind time complexity is that it measures the execution cost of an algorithm in a way that depends only on the algorithm itself and its input. Sorting is not restricted to numbers: since the comparison operator decides the new order, a list of characters can, for example, be sorted in increasing order of their ASCII values.

The minimum possible time complexity of a comparison-based sorting algorithm is O(n log n) for a random input array; it has been proven that an array cannot be sorted any faster by comparisons, which is why the best comparison-based algorithms all land at O(n log n). In practice this is an excellent complexity: the run time increases only slightly faster than the number of items. One refinement worth knowing is the median-of-medians selection algorithm, which finds the k-th element in guaranteed linear time and can be used, for instance, to pick good pivots for quicksort. Finally, the divide-and-conquer structure used by merge sort makes it convenient for parallel processing.
https://gmatclub.com/forum/the-ratio-2-to-1-3-is-equal-to-the-ratio-137535.html?fl=similar
# The ratio 2 to 1/3 is equal to the ratio

**Bunuel (Math Expert), 20 Aug 2012:**

The ratio 2 to 1/3 is equal to the ratio

(A) 6 to 1
(B) 5 to 1
(C) 3 to 2
(D) 2 to 3
(E) 1 to 6

Practice Questions, Question 22, Page 155, Difficulty 550. Official answer: A.

**Bunuel (Math Expert), solution:**

$$\frac{2}{(\frac{1}{3})}=2*\frac{3}{1}=\frac{6}{1}$$

**Manager, 20 Aug 2012:** Solved by back substitution. Two of the answer choices have a "1" for the second part of the ratio. The only way this is possible is if both parts are multiplied by 3. Thus 6 : 1, answer A.

**Scott Woodbury-Stewart (Target Test Prep), 24 May 2016:** When setting up ratios, we can set up a ratio in fraction form. For example, if we were told the ratio of a to b is 1 to 2, we could say a/b = 1/2. Since we are given the ratio of 2 to 1/3, we can write this as 2/(1/3). We simplify this complex fraction by multiplying both the numerator and denominator by 3, obtaining (2 × 3)/1 = 6/1. Thus, the equivalent ratio is 6 to 1.

**rishi02, 12 Jun 2016:** Toughest Question Ever. 2 : 1/3 = 6 : 1.

**Abhishek (QA & VA Forum Moderator), 12 Jun 2016:** $2 : \frac{1}{3}$. Multiply both sides by 3: $2*3 : \frac{1}{3}*3$, giving 6 : 1.

**Director, 27 Jun 2017:**

$$\frac{2}{(\frac{1}{3})}=2*\frac{3}{1}=\frac{6}{1},$$

or a ratio of 6 to 1.
http://math.stackexchange.com/questions/156275/number-of-embeddings-in-algebraic-closure/156341
# Number of embeddings in algebraic closure

I'm having trouble following the details of the discussion on pages 9 and 10 of Neukirch's algebraic number theory book. Suppose $L$ is a separable extension of $K$ of degree $n$. Consider the set of embeddings of $L$ into $\bar K$, the algebraic closure of $K$, that fix $K$ ($K$-embeddings). Why are there $n$ embeddings in this set?

EDIT: Also, consider some element $x\in L$. Let $d$ be the degree of $L$ over $K(x)$ and $m$ be the degree of $K(x)$ over $K$. Why are the $K$-embeddings of $L$ partitioned by the equivalence relation $$\sigma\sim\tau\ \Leftrightarrow\ \sigma x = \tau x$$ into $m$ equivalence classes of $d$ elements each?

- Use the primitive element theorem. – Qiaochu Yuan Jun 9 '12 at 23:31
- For the second question, apply the first result to $L$ as an extension of $K$ and then to $L$ as an extension of $K(x)$. – Qiaochu Yuan Jun 9 '12 at 23:49
- Where exactly are you in that page 11 in Neukirch's book? I have the 1999 edition of the book, and on page 11 he talks about the discriminant, proposition 2.8... so where are you? – DonAntonio Jun 10 '12 at 2:12

The idea behind the proof is that for a field $K$ and an element $\alpha \in \bar{K}$, the roots of the minimal polynomial of $\alpha$ over $K$ are exactly the conjugates of $\alpha$ over $K$; since the extension is separable, these roots are distinct. Then, taking $L = K(\alpha)$ (which is possible by the primitive element theorem), each conjugate of $\alpha$ defines a unique embedding from $L$ to $\bar{K}$. Since $[L:K] = n$, the minimal polynomial has degree $n$, so there are $n$ distinct embeddings.
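For a concrete illustration of this (a standard example): take $K=\mathbb{Q}$ and $L=\mathbb{Q}(\sqrt{2})$, so that $n=2$. The minimal polynomial of $\sqrt{2}$ over $\mathbb{Q}$ is
$$x^2-2=(x-\sqrt{2})(x+\sqrt{2})\in\bar{\mathbb{Q}}[x],$$
and its two roots give exactly the two $\mathbb{Q}$-embeddings $\sigma_1:\sqrt{2}\mapsto\sqrt{2}$ and $\sigma_2:\sqrt{2}\mapsto-\sqrt{2}$.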
http://www.nag.com/numeric/FL/nagdoc_fl24/html/G01/g01atf.html
# NAG Library Routine Document: G01ATF

Note: before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details.

## 1  Purpose

G01ATF calculates the mean, standard deviation, coefficients of skewness and kurtosis, and the maximum and minimum values for a set of (optionally weighted) data. The input data can be split into arbitrary sized blocks, allowing large datasets to be summarised.

## 2  Specification

SUBROUTINE G01ATF (NB, X, IWT, WT, PN, XMEAN, XSD, XSKEW, XKURT, XMIN, XMAX, RCOMM, IFAIL)
INTEGER NB, IWT, PN, IFAIL
REAL (KIND=nag_wp) X(NB), WT(*), XMEAN, XSD, XSKEW, XKURT, XMIN, XMAX, RCOMM(20)

## 3  Description

Given a sample of $n$ observations, denoted by $x=\{x_i : i=1,2,\dots,n\}$, and a set of non-negative weights, $w=\{w_i : i=1,2,\dots,n\}$, G01ATF calculates a number of quantities:

(a) Mean
$$\bar{x} = \frac{\sum_{i=1}^{n} w_i x_i}{W}, \quad\text{where } W = \sum_{i=1}^{n} w_i.$$

(b) Standard deviation
$$s_2 = \sqrt{\frac{\sum_{i=1}^{n} w_i \left(x_i-\bar{x}\right)^2}{d}}, \quad\text{where } d = W - \frac{\sum_{i=1}^{n} w_i^2}{W}.$$

(c) Coefficient of skewness
$$s_3 = \frac{\sum_{i=1}^{n} w_i \left(x_i-\bar{x}\right)^3}{d\, s_2^3}.$$

(d) Coefficient of kurtosis
$$s_4 = \frac{\sum_{i=1}^{n} w_i \left(x_i-\bar{x}\right)^4}{d\, s_2^4} - 3.$$

(e) Maximum and minimum elements, among observations with $w_i \ne 0$.

These quantities are calculated using the one pass algorithm of West (1979). For large datasets, or where all the data is not available at the same time, $x$ and $w$ can be split into arbitrary sized blocks and G01ATF called multiple times.

## 4  References

West D H D (1979) Updating mean and variance estimates: An improved method. Comm. ACM 22 532–555

## 5  Parameters

1: NB – INTEGER (Input)

On entry: $b$, the number of observations in the current block of data. The size of the block of data supplied in X and WT can vary; therefore NB can change between calls to G01ATF.
Constraint: NB ≥ 0.

2: X(NB) – REAL (KIND=nag_wp) array (Input)

On entry: the current block of observations, corresponding to $x_i$, for $i=k+1,\dots,k+b$, where $k$ is the number of observations processed so far and $b$ is the size of the current block of data.

3: IWT – INTEGER (Input)

On entry: indicates whether user-supplied weights are provided:
- IWT = 1: user-supplied weights are given in the array WT.
- IWT = 0: $w_i = 1$ for all $i$, so no user-supplied weights are given and WT is not referenced.
Constraint: IWT = 0 or 1.

4: WT(*) – REAL (KIND=nag_wp) array (Input)

Note: the dimension of the array WT must be at least NB if IWT = 1.
On entry: if IWT = 1, WT must contain the user-supplied weights corresponding to the block of data supplied in X, that is $w_i$, for $i=k+1,\dots,k+b$.
Constraint: if IWT = 1, WT(i) ≥ 0, for $i=1,2,\dots,$ NB.

5: PN – INTEGER (Input/Output)

On entry: the number of valid observations processed so far, that is the number of observations with $w_i>0$, for $i=1,2,\dots,k$. On the first call to G01ATF, or when starting to summarise a new dataset, PN must be set to 0. If PN ≠ 0, it must be the same value as returned by the last call to G01ATF.
On exit: the updated number of valid observations processed, that is the number of observations with $w_i>0$, for $i=1,2,\dots,k+b$.
Constraint: PN ≥ 0.
6: XMEAN – REAL (KIND=nag_wp) (Output)

On exit: $\bar{x}$, the mean of the first $k+b$ observations.

7: XSD – REAL (KIND=nag_wp) (Output)

On exit: $s_2$, the standard deviation of the first $k+b$ observations.

8: XSKEW – REAL (KIND=nag_wp) (Output)

On exit: $s_3$, the coefficient of skewness for the first $k+b$ observations.

9: XKURT – REAL (KIND=nag_wp) (Output)

On exit: $s_4$, the coefficient of kurtosis for the first $k+b$ observations.

10: XMIN – REAL (KIND=nag_wp) (Output)

On exit: the smallest value in the first $k+b$ observations.

11: XMAX – REAL (KIND=nag_wp) (Output)

On exit: the largest value in the first $k+b$ observations.

12: RCOMM(20) – REAL (KIND=nag_wp) array (Communication Array)

On entry: communication array, used to store information between calls to G01ATF. If PN = 0, RCOMM need not be initialized, otherwise it must be unchanged since the last call to this routine.
On exit: the updated communication array. The first five elements of RCOMM hold information that may be of interest, with
$$\mathrm{RCOMM}(1) = \sum_{i=1}^{k+b} w_i, \qquad \mathrm{RCOMM}(2) = \left(\sum_{i=1}^{k+b} w_i\right)^{2} - \sum_{i=1}^{k+b} w_i^2,$$
$$\mathrm{RCOMM}(3) = \sum_{i=1}^{k+b} w_i\left(x_i-\bar{x}\right)^2, \qquad \mathrm{RCOMM}(4) = \sum_{i=1}^{k+b} w_i\left(x_i-\bar{x}\right)^3, \qquad \mathrm{RCOMM}(5) = \sum_{i=1}^{k+b} w_i\left(x_i-\bar{x}\right)^4;$$
the remaining elements of RCOMM are used for workspace and so are undefined.

13: IFAIL – INTEGER (Input/Output)

On entry: IFAIL must be set to 0, −1 or 1. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details. For environments where it might be inappropriate to halt program execution when an error is detected, the value −1 or 1 is recommended. If the output of error messages is undesirable, then the value 1 is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is 0. When the value −1 or 1 is used it is essential to test the value of IFAIL on exit.
On exit: IFAIL = 0 unless the routine detects an error or a warning has been flagged (see Section 6).

## 6  Error Indicators and Warnings

If on entry IFAIL = 0 or −1, explanatory error messages are output on the current error message unit (as defined by X04AAF). Errors or warnings detected by the routine:

- IFAIL = 11: On entry, NB = ⟨value⟩. Constraint: NB ≥ 0.
- IFAIL = 31: On entry, IWT = ⟨value⟩. Constraint: IWT = 0 or 1.
- IFAIL = 41: On entry, WT(⟨value⟩) = ⟨value⟩. Constraint: if IWT = 1 then WT(i) ≥ 0, for i = 1,2,…,NB.
- IFAIL = 51: On entry, PN = ⟨value⟩. Constraint: PN ≥ 0.
- IFAIL = 52: On entry, PN = ⟨value⟩. On exit from previous call, PN = ⟨value⟩. Constraint: if PN > 0, PN must be unchanged since previous call.
- IFAIL = 53: On entry, the number of valid observations is zero.
- IFAIL = 71: On exit we were unable to calculate XSKEW or XKURT. A value of 0 has been returned.
- IFAIL = 72: On exit we were unable to calculate XSD, XSKEW or XKURT. A value of 0 has been returned.
- IFAIL = 121: RCOMM has been corrupted between calls.

## 7  Accuracy

Not applicable.

Both G01ATF and G01AUF consolidate results from multiple summaries.
Whereas the former can only be used to combine summaries calculated sequentially, the latter combines summaries calculated in an arbitrary order, allowing, for example, summaries calculated on different processing units to be combined.

## 9  Example

This example summarises some simulated data. The data is supplied in three blocks, the first consisting of 21 observations, the second 51 observations and the last 28 observations.

### 9.1  Program Text

Program Text (g01atfe.f90)

### 9.2  Program Data

Program Data (g01atfe.d)

### 9.3  Program Results

Program Results (g01atfe.r)
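The example programs above show the routine in actual use. Purely for intuition about the one-pass method of West (1979) on which G01ATF is based, here is a minimal Python sketch of the running weighted mean and standard deviation updates, using the denominator $d = W - \sum w_i^2 / W$ from Section 3 (illustrative only; this is not the NAG implementation, and it omits the skewness, kurtosis, and min/max updates):

```python
# Illustrative one-pass weighted mean/std update in the spirit of West (1979).
# State: (W, W2, mean, m2) = (sum of weights, sum of squared weights,
# running weighted mean, running sum of w*(x - mean)^2).

def update(state, x, w):
    """Fold a single observation (x, w) into the running summary."""
    W, W2, mean, m2 = state
    if w == 0.0:
        return state                # observations with zero weight are skipped
    W_new = W + w
    delta = x - mean
    r = delta * w / W_new
    return (W_new, W2 + w * w, mean + r, m2 + W * delta * r)

def finalize(state):
    W, W2, mean, m2 = state
    d = W - W2 / W                  # the denominator d used by G01ATF
    return mean, (m2 / d) ** 0.5    # weighted mean and standard deviation

state = (0.0, 0.0, 0.0, 0.0)
for x, w in [(1.0, 1.0), (2.0, 2.0), (4.0, 1.0)]:  # data can arrive in blocks
    state = update(state, x, w)
print(finalize(state))              # (2.25, 1.378...)
```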
https://math.stackexchange.com/questions/3389838/symmetric-closure-and-transitive-closure-of-a-relation/3389843
# Symmetric closure and transitive closure of a relation

I need some help in regards to symmetric and transitive closures. So let's suppose that we have a relation $$R = \{(1,2), (2,1), (3,1)\}$$ on a set $$X = \{1,2,3,4\}$$. I know that if we want to create the reflexive closure of the relation, then we need to include $$(1,1), (2,2), (3,3)$$ and $$(4,4)$$. So even though the number $$4$$ is missing from the original relation, we still need to include it in the reflexive closure. I have learned this from the Wikipedia page for reflexive closures: https://en.wikipedia.org/wiki/Reflexive_closure

Now my question is the following: does this hold true for symmetric and transitive closures as well? If an element is missing in the original relation, do we need to include it in its symmetric and transitive closures?

Symmetric closure means that if $$x\sim y$$, then you add $$y\sim x$$ to the relation. Similarly, transitive closure means that if both $$x\sim y$$ and $$y\sim z$$ are in the relation, then you add $$x \sim z$$. Notice that both conditions only ever mention elements that already appear in some pair of the relation. So, unlike the reflexive closure (where reflexivity quantifies over all of $$X$$), an element such as $$4$$ that appears in no pair never needs to be added.
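As a quick sanity check, here is a small Python sketch computing both closures of the relation from the question; note that the element 4 never appears:

```python
# Compute the symmetric and transitive closures of R = {(1,2), (2,1), (3,1)}.
R = {(1, 2), (2, 1), (3, 1)}

# Symmetric closure: for every (x, y), also include (y, x).
symmetric = R | {(y, x) for (x, y) in R}

# Transitive closure: keep adding (x, z) whenever (x, y) and (y, z) are present.
transitive = set(R)
while True:
    new_pairs = {(x, z) for (x, y) in transitive for (y2, z) in transitive if y == y2}
    if new_pairs <= transitive:     # nothing new was produced: we are done
        break
    transitive |= new_pairs

print(symmetric)   # {(1, 2), (2, 1), (3, 1), (1, 3)}
print(transitive)  # {(1, 2), (2, 1), (3, 1), (1, 1), (2, 2), (3, 2)}
```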
https://casper.astro.berkeley.edu/astrobaki/index.php?title=Convolution_Theorem&oldid=2259
# Convolution Theorem

## Fourier Transform

Here are the definitions we will use for the forward ($\mathcal{F}$) and inverse ($\mathcal{F}^{-1}$) Fourier transforms:

\begin{align}
f(t) &= \mathcal{F}(\hat{f}(\omega)) = \frac{1}{2\pi}\int \hat{f}(\omega)\, e^{i\omega t}\, d\omega \\
\hat{f}(\omega) &= \mathcal{F}^{-1}(f(t)) = \int f(t)\, e^{-i\omega t}\, dt
\end{align}

where $\omega \equiv 2\pi\nu$ is the angular frequency coordinate that is the Fourier complement of time $t$, and a top-hat is generally used to denote Fourier-domain quantities.

## Convolution Theorem

The convolution is a useful operation with applications ranging from photo editing to crystallography to astronomy. In words, the convolution of two functions $f, g$ is what you get when you smooth one function ($f$) by another ($g$). Note that the order of $f$ and $g$ does not matter, though people often call the latter the "kernel". Smoothing $f$ by $g$ means that you slide $g$ along $f$, and at each step along the way, you sum up all of the parts of $f$ with weights drawn from the value of $g$ at the point you slid it to. In essence, you are blurring $f$ by $g$. Mathematically, this is described as:

\begin{align}
[f*g](\tau) &\equiv \int f(t)\, g(\tau - t)\, dt \\
&= \frac{1}{(2\pi)^2}\iint \hat{f}(\omega_1) e^{i\omega_1 t}\, d\omega_1\, \hat{g}(\omega_2) e^{i\omega_2(\tau - t)}\, d\omega_2\, dt \\
&= \frac{1}{(2\pi)^2}\iint \hat{f}(\omega_1)\hat{g}(\omega_2)\, e^{i(\omega_1 - \omega_2)t}\, e^{i\omega_2\tau}\, d\omega_1\, d\omega_2\, dt \\
&= \frac{1}{2\pi}\int \hat{f}(\omega)\hat{g}(\omega)\, e^{i\omega\tau}\, d\omega,
\end{align}

where in the last step the $t$ integral produced $2\pi\,\delta(\omega_1 - \omega_2)$, collapsing the two frequency integrals into one. Renaming $\tau$ to be $t$ (which we are totally free to do), we get a statement of the convolution theorem:

$$f(t) * g(t) = \frac{1}{2\pi}\int \hat{f}(\omega)\hat{g}(\omega)\, e^{i\omega t}\, d\omega = \mathcal{F}\!\left(\mathcal{F}^{-1}(f)\cdot \mathcal{F}^{-1}(g)\right).$$

### Convolution vs. Correlation

Correlation is very similar to convolution, and it is best defined through its equivalent "correlation theorem":

$$f(t) \star g(t) = \frac{1}{2\pi}\int \hat{f}(\omega)\hat{g}^*(\omega)\, e^{i\omega t}\, d\omega.$$

The difference between correlation and convolution is that when correlating two signals, the Fourier transform of the second function ($\hat{g}(\omega)$ above) is conjugated before multiplying and integrating. Using that

\begin{align}
g^*(-t) &= \frac{1}{2\pi}\int \hat{g}^*(\omega)\, e^{-i\omega(-t)}\, d\omega \\
&= \frac{1}{2\pi}\int \hat{g}^*(\omega)\, e^{i\omega t}\, d\omega,
\end{align}

we can show that correlating $f(t)$ and $g(t)$ is equivalent to convolving $f(t)$ with a conjugated, time-reversed version of $g(t)$:

$$f(t) * g^*(-t) = f(t) \star g(t).$$

Although this relation between correlation and convolution is often mentioned in the literature, I don't personally find it very intuitively illuminating.
I much prefer the "correlation theorem" above, because when it is combined with the expression of a time-shifted signal in the Fourier domain:

\begin{align}
f(t-\tau) &= \frac{1}{2\pi}\int \hat{f}(\omega)\, e^{i\omega(t-\tau)}\, d\omega \\
&= \frac{1}{2\pi}\int \hat{f}(\omega)\, e^{i\omega t} e^{-i\omega\tau}\, d\omega,
\end{align}

it shows that correlating a flat-spectrum signal with a time-shifted version of itself yields a measure of the power of the signal at the delay corresponding to the time shift:

\begin{align}
f(t) \star f(t-\tau) &= \frac{1}{2\pi}\int \hat{f}(\omega)\hat{f}^*(\omega)\, e^{-i\omega\tau} e^{i\omega t}\, d\omega \\
&= \frac{1}{2\pi}\int |\hat{f}|^2\, e^{i\omega(t-\tau)}\, d\omega \\
&= |\hat{f}|^2 \cdot \delta(t-\tau).
\end{align}
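The convolution theorem is easy to verify numerically in its discrete form, where the discrete Fourier transform takes the place of the integrals and the convolution is circular. Here is a small NumPy sketch (illustrative; the array length and random inputs are arbitrary choices):

```python
# Numerical check of the convolution theorem in its discrete (circular) form:
# the DFT of a circular convolution equals the product of the DFTs.
import numpy as np

rng = np.random.default_rng(0)
n = 256
f = rng.standard_normal(n)
g = rng.standard_normal(n)

# Convolution via the theorem: transform, multiply, transform back.
via_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

# Direct "slide and sum" circular convolution for comparison.
direct = np.array([sum(f[k] * g[(m - k) % n] for k in range(n)) for m in range(n)])

print(np.allclose(via_fft, direct))  # True
```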
https://info343.github.io/css.html
# Chapter 5 CSS Fundamentals

CSS (Cascading Style Sheets) is a declarative language used to alter the appearance or styling of a web page. CSS is used to define a set of formatting rules, which the browser applies when it renders your page. Thus CSS can tell the browser to use a particular font for the page text, a certain color for the first paragraph in an article, or a picture for the page's background. Files of CSS rules (called stylesheets) thus act kind of like Styles or Themes in PowerPoint, but are way more powerful. You can control nearly every aspect of an element's appearance, including its overall placement on the page.

- To give you some idea of just how much control you have, check out the examples in the CSS Zen Garden. Every one of those examples uses the exact same HTML content, but they all look completely different because each one uses a different CSS stylesheet.

This chapter will explain how to include CSS in your web page and the overall syntax for declaring basic CSS rules. Additional details and options can be found in the following chapters.

## 5.1 Why Two Different Languages?

If you are new to web programming, you might be wondering why there are two different languages: HTML for your page content, and CSS for formatting rules. Why not just include the formatting right in with the content?

There is an old, tried-and-true principle in programming referred to as "separation of concerns". Well-designed software keeps separate things separate, so that it's easy to change one without dealing with the other. And one of the most common forms of separation is to keep the data (content) in a program separate from the presentation (appearance) of that data.

By separating content (the HTML) from its appearance (the CSS), you get a number of benefits:

- The same content can easily be presented in different ways (like in the CSS Zen Garden). In web development, you could allow the user to choose different "themes" for a site, or you could change the formatting for different audiences (e.g., larger text for vision-impaired users, more compact text for mobile users, or different styles for cultures with different aesthetic sensibilities).
- You can have several HTML pages all share the same CSS stylesheet, allowing you to change the look of an entire web site by only editing one file. This is an application of the Don't Repeat Yourself (DRY) principle.
- You can also dynamically adjust the look of your page by applying new style rules to elements in response to user interaction (clicking, hovering, scrolling, etc.), without changing the content.
- Users who don't care about the visual appearance (e.g., blind users with screen readers, automated web indexers) can more quickly and effectively engage with the content without needing to determine what information is "content" and what is just "aesthetics".

Good programming style in web development thus keeps the semantics (HTML) separate from the appearance (CSS). Your HTML should simply describe the meaning of the content, not what it looks like! For example, while browsers might normally show <em> text as italic, you can use CSS to instead make emphasized text underlined, highlighted, larger, flashing, or with some other appearance. The <em> says nothing about the visual appearance, just that the text is emphatic, and it's up to the styling to determine how that emphasis should be conveyed visually.
## 5.2 CSS Rules

While it's possible to write CSS rules directly into HTML, the best practice is to create a separate CSS stylesheet file and connect that to your HTML content. These files are named with the .css extension, and are typically put in a css/ folder in a web page's project directory, as with the following folder structure:

my-project/
|-- css/
|   |-- style.css
|-- index.html

(style.css, main.css, and index.css are all common names for the "main" stylesheet).

You connect the stylesheet to your HTML by adding a <link> element to your page's <head> element:

<head>
  <!--... other elements here...-->
  <link rel="stylesheet" href="css/style.css">
</head>

The <link> element represents a connection to another resource. The tag includes a rel attribute indicating the relation between the resources (e.g., that the linked file is a stylesheet). The href attribute should be a relative path from the .html file to the .css resource. Note also that a <link> is an empty element so has no closing tag.

- It is also possible to include CSS code directly in your HTML by embedding it in a <style> tag in the <head>, but this is considered bad practice (keep concerns separated!) and should only be used for quick tests.

### Overall Syntax

A CSS stylesheet lists rules for formatting particular elements in an HTML page. The basic syntax looks like:

/* This is pseudocode for a CSS rule */
selector {
  property: value;
  property: value;
}

/* This would be another, second rule */
selector {
  property: value;
}

A CSS rule starts with a selector, which specifies which elements the rule applies to. The selector is followed by a pair of braces {}, inside of which is a set of formatting properties. Properties are made up of the property name (e.g., color), followed by a colon (:), followed by a value to be assigned to that property (e.g., purple). Each name-value pair must end with a semi-colon (;).

- If you forget the semi-colon, the browser will likely ignore the property and any subsequent properties—and it does so silently without showing an error in the developer tools!

Like most programming languages, CSS ignores new lines and whitespace. However, most developers will use the styling shown above, with the brace on the same line as the selector and indented properties.

As a concrete example, the below rule applies to any h1 elements, and makes them appear in the 'Helvetica' font as white text on a dark gray background:

h1 {
  font-family: 'Helvetica';
  color: white;
  background-color: #333; /* dark gray */
}

Note that CSS comments are written using the same block-comment syntax used in Java (/* a comment */), but cannot be written using inline-comment syntax (//a comment).

When you modify a CSS file, you will need to reload the page in your browser to see the changed appearance. If you are using a program such as live-server, this reloading should happen automatically!

### CSS Properties

There are many, many different CSS formatting properties you can use to style HTML elements. All properties are specified using the name:value syntax described above—the key is to determine the name of the property that produces the appearance you want, and then provide a valid value for that property.

Pro Tip: modern editors such as VS Code will provide auto-complete suggestions for valid property names and values. Look carefully at those options to discover more!

Below is a short list of common styling properties you may change with CSS; more complex properties and their usage are described in the following chapters.

- font-family: the "font" of the text (e.g., 'Comic Sans').
Font names containing white space must be put in quotes (single or double), and it’s common practice to quote any specific font name as well (e.g., 'Arial'). Note that the value for the font-family property can also be a comma-separated list of fonts, with the browser picking the first item that is available on that computer:

/* pick Helvetica Neue if it exists, else Helvetica, else Arial, else the default sans-serif font */
font-family: 'Helvetica Neue', 'Helvetica', 'Arial', sans-serif;

• font-size: the size of the text (e.g., 12px to be 12 pixels tall). The value must include units (so 12px, not 12). See the next chapter for details on units & sizes.

• font-weight: boldness (e.g., bold, or a numerical value such as 700).

• color: text color (e.g., either a named color like red or a hex value like #4b2e83). See the next chapter for details on colors. The background-color property specifies the background color for the element.

• border: a border for the element (see “Box Model” in Chapter 7). Note that this is a short-hand property which actually sets multiple related properties at once. The value is thus an ordered list of values separated by spaces:

/* border-width should be 3px, border-style should be dashed, and border-color should be red */
border: 3px dashed red;

Read the documentation for an individual property to determine what options are available! Note that not all properties or values will be effectively or correctly supported by all browsers. Be sure to check the browser compatibility listings!

### CSS Selectors

Selectors are used to “select” which HTML elements the CSS rule should apply to. As with properties, there are many different kinds of selectors (see also the following chapter), but three are most common:

#### Element Selector

The most basic selector, the element selector selects elements by their element (tag) name. For example, the below rule will apply to all <p> elements, regardless of where they appear on the page:

p {
   color: purple;
}

You can also use this to apply formatting rules to the entire page by selecting the <body> element. Note that for clarity/speed purposes, we generally do not apply formatting to the <html> element.

body {
   background-color: black;
   color: white;
}

#### Class Selector

Sometimes you want a rule to apply to only some elements of a particular type. You will most often do this by using a class selector. This rule will select elements with a class attribute that contains the specified name. For example, if you had HTML:

<!-- HTML -->
<p class="highlighted">This text is highlighted!</p>
<p>This text is not highlighted</p>

You could color just the correct paragraph by using the class selector:

/* CSS */
.highlighted {
   background-color: yellow;
}

Class selectors are written with a single dot (.) preceding the name of the class (not the name of the tag!). The . is only used in the CSS rule, not in the HTML class attribute.

Class selectors also let you apply a single, consistent styling to multiple different types of elements:

<!-- HTML -->
<h1 class="alert-flashing">I am big and flashing!</h1>
<p class="alert-flashing">So am I!</p>

CSS class names should start with a letter, and can contain hyphens, underscores, and numbers. Words are usually written in lowercase and separated by hyphens rather than camelCased or snake_cased.
Note that HTML elements can contain multiple classes; each class name is separated by a space:

<p class="alert flashing">I have TWO classes: "alert" and "flashing"</p>
<p class="alert-flashing">I have ONE class: "alert-flashing"</p>

The class selector will select any element that contains that class in its list. So the first paragraph in the above example would be selected by either .alert OR .flashing.

You should always strive to give CSS classes semantic names that describe the purpose of the element, rather than just what it looks like. highlighted is a better class name than just yellow, because it tells you what you’re styling (and will remain sensible even if you change the styling later). Overall, seek to make your class names informative, so that your code is easy to understand and modify later. There are also more formal methodologies for naming classes that you may wish to utilize, the most popular of which is BEM (Block, Element, Modifier).

Class selectors are commonly used with <div> (block) and <span> (inline) elements. These HTML elements have no semantic meaning on their own, but can be given meaning through their class attribute. This allows them to “group” content together for styling:

<div class="cow">
   <p>Moo moo moo.</p>
   <p>Moooooooooooooooooooo.</p>
</div>

<div class="sheep">
   <p>Baa baa <span class="dark">black</span> sheep, have you any wool?</p>
</div>

#### Id Selector

It is also possible to select HTML elements by their id attribute by using an id selector. Every HTML element can have an id attribute, but unlike the class attribute the value of the id must be unique within the page. That is, no two elements can have the same value for their id attributes. Id selectors start with a # sign, followed by the value of the id:

<div id="sidebar">
   This div contains the sidebar for the page
</div>

/* Style the one element with id="sidebar" */
#sidebar {
   background-color: lightgray;
}

The id attribute is more specific (it’s always just one element!) but less flexible than the class attribute, and makes it harder to “reuse” your styling across multiple elements or multiple pages. Thus you should almost always use a class selector instead of an id selector, unless you are referring to a single, specific element.

CSS is called Cascading Style Sheets because multiple rules can apply to the same element (in a “cascade” of style!). CSS rules are additive: if multiple rules apply to the same element, the browser will combine all of the style properties when rendering the content:

/* CSS */
p { /* applies to all paragraphs */
   font-family: 'Helvetica';
}

.alert { /* applies to all elements with class="alert" */
   font-size: larger;
}

.success { /* applies to all elements with class="success" */
   color: #28a745; /* a pleasant green */
}

<!-- HTML -->
<p class="alert success">
   This paragraph will be in Helvetica font, a larger font-size, and green color, because all 3 of the above rules apply to it.
</p>

CSS styling applies to all of the content in an element. And since that content can contain other elements that may have their own style rules, rules may also in effect be inherited:

<div class="content"> <!-- has own styling -->
   <div class="sub-sec"> <!-- has own styling + .content styling -->
      <ol class="demo-list"> <!-- own styling (ol AND .demo-list rules) + .sub-sec + .content -->
         <!-- each li gets li styling + .demo-list + .sub-sec + .content -->
         <li>Item 1</li>
         <li>Item 2</li>
         <li>Item 3</li>
      </ol>
   </div>
</div>

We call these inherited properties, because the child elements inherit the setting from their ancestor elements.
This is a powerful mechanism that allows you to specify properties only once for a given branch of the DOM element tree. In general, try to set these properties on the highest-level element you can, and let the child elements inherit the setting from their ancestor.

### Rule Specificity

Important! Rules are applied in the order they are defined in the CSS file. If you link multiple CSS files from the same HTML page, the files are processed in order as they are linked in the HTML. In processing a CSS file, the browser selects elements that match the rule and applies the rule’s properties. If a later rule selects the same element and applies a different value to that property, the previous value is overridden. So in general, all things being equal, the last rule on the page wins.

/* Two rules, both alike in specificity */
p { color: red; }
p { color: blue; }

<p>This text will be blue, because that rule comes last!</p>

However, there are some exceptions when CSS treats rules as not equal and favors earlier rules over later ones. This is called Selector Specificity. In general, more specific selectors (#id) take precedence over less specific ones (.class, which in turn is more specific than a tag name). If you notice that one of your style rules is not being applied, despite your syntax being correct, check your browser’s developer tools to see if your rule is being overridden by a more specific rule in an earlier stylesheet. Then adjust your selector so that it has the same or greater specificity:

/* css */
.alert { color: red; }
div { color: blue; }

<!-- html -->
<div class="alert">This text will be red, even though the div selector is last, because the .alert selector has higher specificity so is not overridden.</div>

Precedence rules are not a reason to prefer #id selectors over .class selectors! Instead, you can utilize the compound selectors described in Chapter 6 to create reusable rules and avoid duplicating property declarations.
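To tie these mechanisms together, here is a small hypothetical stylesheet and page fragment (the class and id names are invented for illustration, not taken from the chapter) showing inheritance and specificity interacting:

/* an illustrative sketch, not from the chapter */
body { font-family: 'Helvetica', sans-serif; } /* inherited by every child element */
p { color: gray; }             /* element selector: least specific */
.highlight { color: purple; }  /* class selector: beats the element selector */
#intro { color: green; }       /* id selector: beats the class selector */

<body>
   <p>Gray Helvetica text (font inherited from body).</p>
   <p class="highlight">Purple text: the class rule overrides the p rule.</p>
   <p class="highlight" id="intro">Green text: the id rule wins over both.</p>
</body>

All three paragraphs inherit the font set once on <body>, while the color of each is decided by the most specific selector that matches it.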
https://math.stackexchange.com/questions/4008787/probability-density-function-and-radon-nikodym-derivative
# Probability density function and Radon-Nikodym derivative

Consider a probability space $$(\Omega,\Sigma,P)$$. We say that a real random variable $$X\colon \Omega \to \mathbb{R}$$ is absolutely continuous when there exists a function $$f_X\colon \mathbb{R} \to [0,+\infty)$$ such that $$\mu_X((-\infty,x])=\int_{-\infty}^x f_X(t)\,\text{d}t$$ for all $$x \in \mathbb{R}$$, where $$\mu_X\colon \mathcal{B}(\mathbb{R}) \to [0,1] \mid \mu_X(A)=P(X \in A)$$, namely $$\mu_X$$ is the Borel pushforward measure (or image measure) of $$X$$. $$f_X$$ is said to be the probability density function (PDF) of $$X$$.

Consider the measure space $$(\mathbb{R},\mathcal{B}(\mathbb{R}),\mu_X)$$. The Radon-Nikodym derivative $$\frac{\text{d}\mu_X}{\text{d}\lambda}$$ is a measurable function $$f\colon \mathbb{R} \to [0,+\infty)$$ such that for all $$A \in \mathcal{B}(\mathbb{R})$$, $$\mu_X(A)=\int_A f\,\text{d}\lambda$$, where $$\lambda$$ is the Lebesgue measure.

So my question: is it true that $$f_X=f$$? I have found on Wikipedia that: "The probability density function of a random variable is the Radon–Nikodym derivative of the induced measure with respect to some base measure (usually the Lebesgue measure for continuous random variables)". So I suppose that the answer to my question is "yes". We know that $$\forall \,x \in \mathbb{R} \quad \mu_X((-\infty,x])=\int_{-\infty}^x f_X(t)\,\text{d}t=\int_{(-\infty,x]} f_X\,\text{d}\lambda$$, where obviously $$(-\infty,x] \in \mathcal{B}(\mathbb{R})$$. In order to show that $$f_X=f$$ we should prove that the above relation holds for every $$A \in \mathcal{B}(\mathbb{R})$$.

If $$f:\mathbb R\to\mathbb R$$ is a nonnegative measurable map such that $$\int_{\mathbb R}f(x)\,dx=1$$, then there exists a unique probability measure $$\mu$$ on $$\mathcal B(\mathbb R)$$ such that for all $$x\in\mathbb R$$, $$\mu((-\infty,x])=\int_{-\infty}^xf(t)\,dt$$. This is the very well known fact that the cumulative distribution function characterises a probability measure. So this measure $$\mu$$ is the one defined for all $$A\in\mathcal B(\mathbb R)$$ by $$\mu(A)=\int_Af(x)\,dx$$. We deduce that a real-valued random variable $$X$$ is absolutely continuous iff its distribution $$\mu_X$$ is absolutely continuous with respect to the Lebesgue measure $$\lambda$$, in which case $$X$$ admits the probability density function $$f_X=\frac{d\mu_X}{d\lambda}$$.

You could also check that if $$X:\Omega\to\mathbb N$$ is a discrete random variable, then the distribution $$\mu_X$$ of $$X$$ admits a density with respect to the counting measure $$\nu=\sum_{n\in\mathbb N}\delta_n$$. More precisely, the density $$\frac{d\mu_X}{d\nu}=f_X:\mathbb N\to\mathbb R$$ is defined for all $$n\in\mathbb N$$ by $$f_X(n)=\mathbb P(X=n)$$. Indeed we have for any $$A\subset\mathbb N$$ and measurable bounded map $$h:\mathbb N\to\mathbb R$$ that $$\mathbb E[h(X)]=\sum_{n\in\mathbb N}h(n)\mathbb P(X=n)=\int_{\mathbb N}h(n)f_X(n)\,\nu(dn).$$
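One standard way to fill in the remaining step the question asks about (that agreement on half-lines forces agreement on every Borel set), sketched here as an addition to the thread: define $$\nu(A)=\int_A f_X\,\text{d}\lambda$$ for $$A\in\mathcal{B}(\mathbb{R})$$. Then $$\mu_X$$ and $$\nu$$ are two probability measures that agree on the collection $$\mathcal{P}=\{(-\infty,x] : x\in\mathbb{R}\}$$. Since $$\mathcal{P}$$ is a π-system (closed under finite intersections) that generates $$\mathcal{B}(\mathbb{R})$$, Dynkin's π–λ theorem gives $$\mu_X=\nu$$ on all of $$\mathcal{B}(\mathbb{R})$$. Hence $$f_X$$ has the defining property of the Radon-Nikodym derivative, and since that derivative is unique up to null sets, $$f_X=f$$ $$\lambda$$-almost everywhere.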
https://dark-element.com/2016/10/10/bayesian-optimization-of-black-box-functions/
# Bayesian Optimization of Black Box Functions – Part 1

Skip the explanation and go directly to the Github code.

## Foreword

My goal here is to provide complete and overarching explanations and implementations of algorithms useful in Machine Learning and Artificial Intelligence Research and Development, but if you don’t care about understanding it, or already understand it, then you can view my (hopefully) well-commented code on the Github page. With that said, let’s begin!

## What are Black Box Functions?

(Figure: a box with an input arrow and an output arrow; credit to Wikipedia)

Black box functions are very much like this picture shows: no different than a normal function, except we don’t know what the function is. So for instance, if we had a function like this:

$f(x) = x^2$

Then we can look at it and easily know that it’s simply going to raise any input to the power 2, for any inputs. However, if this were a black box function:

$f(x) = ???$

We have no idea what the operation(s) performed on the input(s) are, and therefore the function is a black box function.

## What is Black Box Optimization?

Optimization of black box functions requires knowing the difference between normal functions and black box ones. Now that we know that, we can move on. Generally in optimization, you want to find the global maximum or global minimum (however, often times a local maximum/minimum will do just fine). For instance, if we are a seller of Halloween Prop Skeletons, and we know that our sales relative to the number of skeletons produced is modeled by the function

$f(x) = -(x-420)^2 + 360$

(Figure: graph of this downward-opening parabola, peaking at x = 420)

Really simple, right? For two-dimensional known functions, we can actually find the global maximum or minimum with complete certainty, and with this function, anyone can look at this graph and see that the company will maximize their profits by producing 420 skeletons.

Sidenote: Optimizing known functions is not as easy when it comes to functions with multiple inputs, where the dimension is greater than two (e.g. three-dimensional graphs); a common method there is gradient descent. However, I won’t be covering that in this post. Black Box Optimization is a problem no matter how many dimensions there are, however, and Bayesian Black Box Optimization works regardless of dimension.

Black Box Optimization would be when (more realistically) the company doesn’t know an exact function for what their profit will be, so they plug in 100 values from 370-470, and end up with something like this:

(Figure: scatter plot of the measured profits over the range 370-470)

This time, we don’t have a known function, according to our definition of black box functions. So we could try and get a function to represent this, but the end goal is to find the maximum or minimum of the function, so we instead just go for that. Unfortunately, we can’t just look at this and conclude the best number of skeletons to produce is 420, not with complete certainty.

For an example of why we can’t have complete certainty, let’s take an example function:

$f(x) = ???$

As shown earlier. So we start by plugging in some values, and we then get some outputs, since that seems to be the easiest way to do things:

$f(0) = 0, f(1) = 1, f(2) = 4, f(4) = 16$

Hey, this looks just like our $x^2$ function from earlier! After all, we can look at this and see that it matches the behavior perfectly, with the points we’ve tested. But just to be sure, let’s plug in a few more:

$f(5) = 16, f(6) = 16, f(42) = 16 ...$

Uh-oh, suddenly our idea that the function was $x^2$ has been blown out of the water.
But if we had stopped testing points after our first four proved our hypothesis correct, we would have naively continued, with no idea we were completely incorrect. One of the key problems with black box optimization is that we can never know with 100% certainty what our function’s best input is; to do so would mean testing an often-infinite number of input values. This means we have to draw the line somewhere, since we can’t test an infinite number. If we had decided that our arbitrary number of evaluations was four, and we picked these points, we would have been unaware of the true nature of this function (which by the way is actually this):

$3\left(\frac{1}{1+e^{-(200x-320)}}\right)+\left(\frac{1}{1+e^{-(95x-50)}}\right)+12\left(\frac{1}{1+e^{-(300x-950)}}\right)$

(Yes, I fooled you, but it was for your own good.)

Another of the main reasons black box optimization is so tricky is the amount of time it can take to evaluate an input. If our skeleton manufacturer could only get the amount of profits once every Halloween, it would take us 100 years to get the graph we now have, at which point your body would likely be just a skeleton. These examples may seem strange, but they illustrate the pivotal problems:

### Key Problems of Black Box Optimization:

1. Cost may be high – it may take a long time for every evaluation
2. Number of inputs to test may be high – there may be an enormous amount of possible inputs for our function.

Sidenote: Our second pivotal problem is often times magnified by the number of dimensions. For instance, our company may have the number of skeletons produced, number of jack o'lanterns produced, and number of bags of candy corn produced as inputs, with one axis for profit. If we only had three options and it took milliseconds to test each one, we likely wouldn’t even call it black box optimization; the problem is only really prevalent when at least one of these costs is high. A long time per test with few inputs can be a problem, a short time per test with an enormous number of inputs can be a problem, and having both is worse still.

## Conventional Methods of Black Box Optimization

### 1. Hand-Tuning

This is simply going through and choosing our next input based on the last result or past results, done by the choice of the person tuning it.

### 2. Grid Search

What we did earlier with the skeleton manufacturer example is actually the method of black box optimization known as grid search. In our 2D example, we would represent the inputs we tested as:

$x = [370, 371, ..., 470]$

Or, in our $x^2$ example:

$x = [0, 1, 2, 4]$

With this example it’s not obvious why it would be called grid search; after all, it’s just a row. But when we have multiple dimensions for inputs (as is often the case in black box optimization), such as with:

$f(x_1, x_2) = x_1^2 + x_2^2 \\ x_1 = [1, 2, 3] \\ x_2 = [5, 6]$

And when we get all the possible configurations of inputs, it becomes like this:

$\begin{bmatrix} (1, 5) & (1, 6) \\ (2, 5) & (2, 6) \\ (3, 5) & (3, 6) \\ \end{bmatrix}$

As you can see, this looks much more like a grid, which is where the search type gets its name. So grid search is when we generate all the possible combinations of inputs, given a level of detail.

Level of detail – how precise we want our measurement. In the case of all numbers 1 – 10, if we had our level of detail = 10, we’d get 10 elements total (1, 2, 3, 4, ..., 10). We could have a lower number (e.g.
2 = 1, 10), or a higher number (e.g. 100 = 0.1, 0.2, 0.3, …, 10.0). It’s quite possibly the simplest form of black box optimization, other than just hand-tuning. However, there are a few inherent problems with it:

### Problems with Grid Search

1. The search space quickly expands – The size of our “grid” is the product of the number of candidate values for each input. e.g. If we have a level of detail of 100, and have five different inputs, we end up with: $100 * 100 * 100 * 100 * 100 = 100^5 = 10^{10} = 10,000,000,000$. That’s 10 billion different configurations, and it’s not an unrealistic scenario. I have this exact situation with one of my reinforcement learning bots, and I’d like to have more inputs or a higher level of detail.

2. It’s not intuitive – Since our search method has no predictive ability, we aren’t gaining any knowledge of the problem as we go; we are just picking the configuration that gave the best results.

### 3. Random Search

This is very similar to grid search, except instead of searching through every combination in a grid, we randomly choose from each of our domains to make a random combination of inputs, test these, and repeat the number of times we specify. This sounds crazy, but often times it works really well, because some of our parameters have a higher impact on the result than other parameters, which is almost always the case. Here’s a diagram showing this exact case and why Random Search often does better than grid search in such problems (Credit to James Bergstra and Yoshua Bengio’s paper on the topic).

This is actually quite nice, but since it searches randomly (as a non-deterministic algorithm) we can’t run the same program and get similar results every time. While random search does a really good job most of the time, I personally don’t like it. The reason for my dislike is that I’d prefer a method that would give the equivalent results of a good random search run, and not have such a massive amount of randomness in it. We’d like one where we will get more or less the same results every run (a deterministic algorithm). Thankfully, there are many such algorithms that achieve this, albeit at the cost of being much more complicated than hand-tuning, grid search, or random search. For now, I will be covering Bayesian Optimization in part two of this post series.
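To make the two conventional methods concrete, here is a minimal Python sketch (mine, not from the Github code linked above; black_box is an invented stand-in for the unknown objective) that runs grid search and random search on a two-input problem:

import itertools
import random

def black_box(x1, x2):
    # Stand-in for an expensive, unknown function. The real objective
    # would be hidden; this quadratic exists only so the sketch runs.
    return -((x1 - 420) ** 2) - ((x2 - 7) ** 2) + 360

# Grid search: evaluate every combination of the candidate values.
x1_values = range(370, 471, 10)   # level of detail: 11 points
x2_values = range(0, 15, 2)       # level of detail: 8 points
best_grid = max(itertools.product(x1_values, x2_values),
                key=lambda p: black_box(*p))

# Random search: sample the same number of configurations at random.
random.seed(0)  # fix the seed to tame the non-determinism noted above
n_trials = 11 * 8
samples = [(random.uniform(370, 470), random.uniform(0, 14))
           for _ in range(n_trials)]
best_random = max(samples, key=lambda p: black_box(*p))

print("grid best:", best_grid, "random best:", best_random)

Note how the grid commits to fixed levels of detail per input, while random search spreads the same evaluation budget over the whole domain, which is why it often probes the high-impact dimension more finely.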
https://www.physicsforums.com/threads/translation-operator.673442/
# Translation operator

1. Feb 21, 2013

### matematikuvol

$$e^{\alpha\frac{d}{dx}}=1+\alpha\frac{d}{dx}+\frac{\alpha^2}{2!}\frac{d^2}{dx^2}+...=\sum^{\infty}_{n=0}\frac{\alpha^n}{n!}\frac{d^n}{dx^n}$$

Why is this the translation operator? $e^{\alpha\frac{d}{dx}}f(x)=f(x+\alpha)$

2. Feb 21, 2013

### tiny-tim

Taylor expansion?

3. Feb 21, 2013

### G01

Consider alpha to be an infinitesimal translation. Expand $f(x+\alpha)$ for small $\alpha$ to first order. Do the same for the LHS of the equation and you should see that the equality is true for infinitesimal translations. We say that the operator $\frac{d}{dx}$ (technically $\frac{d}{i\,dx}$) is the 'generator' of the translation.

EDIT: Beaten to the punch by TT!

4. Feb 21, 2013

### matematikuvol

I have a problem with that. So $$f(x+\alpha)=f(x)+\alpha f'(x)+...$$ My problem is that we have $\frac{df}{dx}$, and that isn't the value at some fixed point $x$; the value at a fixed point would be $(\frac{df}{dx})_{x_0}$.

5. Feb 21, 2013

I am not sure about your question. But that translation operator is a generic operator, which translates a function's value at x to x+a.

6. Feb 21, 2013

### LagrangeEuler

In a Taylor series $x$ is fixed, while in $\frac{df}{dx}$, $x$ isn't fixed. Well, you suppose that it is.

7. Feb 21, 2013

### tiny-tim

Yes, but x here is a constant, and only α is the variable. If you prefer, write f(x₀ + α) = f(x₀) + α ∂f/∂x|x₀ + …

8. Feb 21, 2013

### matematikuvol

Ok, but that is equal to $$\sum^{\infty}_{n=0}\frac{\alpha^n}{n!}(\frac{df}{dx})_{x_0}$$, so how do we now expand $$e^{\alpha\frac{d}{dx}}$$?

9. Feb 21, 2013

### tiny-tim

No, it's $\sum^{\infty}_{n=0}\frac{\alpha^n}{n!}\left(\frac{d^nf}{dx^n}\right)_{x_0}$

10. Feb 22, 2013

### matematikuvol

I made a mistake. But I'm asking where you get that $(\frac{d^n}{dx^n})_{x_0}$? Please answer my question if you know. In $$e^{\alpha\frac{d}{dx}}=1+\alpha\frac{d}{dx}+\frac{\alpha^2}{2!}\frac{d^2}{dx^2}+...$$ you never have $x_0$.

11. Feb 23, 2013

### tiny-tim

$$\left(e^{\alpha\frac{d}{dx}}(f(x))\right)_{x_0} = \left(\left(1+\alpha\frac{d}{dx}+\frac{\alpha^2}{2!}\frac{d^2}{dx^2}+...\right)(f(x))\right)_{x_0} = \sum^{\infty}_{n=0}\frac{\alpha^n}{n!}\left(\frac{d^nf}{dx^n}\right)_{x_0}$$
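As a concrete sanity check (an addition, not part of the thread): apply the operator to $f(x)=x^2$, where the series terminates because every derivative past the second vanishes:

$$e^{\alpha\frac{d}{dx}}\,x^2 = x^2 + \alpha\,(2x) + \frac{\alpha^2}{2!}\,(2) = x^2 + 2\alpha x + \alpha^2 = (x+\alpha)^2 = f(x+\alpha)$$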
https://mathinthespotlight.wordpress.com/tag/mersenne-primes/
# How long is 2 to 77,232,917 minus 1?

The largest known prime seems to be updated about once every two years. On December 26, 2017, the newest largest known prime number was discovered. The prior discovery was on January 7, 2016 (it was found by a machine on September 17, 2015, but no human took note of it until January 7, 2016).

(Figure: the first digits of 2 to 77,232,917 minus 1)

Like the previous largest prime, this freshly found largest prime number is of the form $2^p-1$. That is, it is a power of 2 minus 1. In particular, the $p$ for this new largest prime is 77,232,917. So the number is obtained by multiplying 2 by itself 77,232,917 times and subtracting 1.

This number consists of over 23 million decimal digits (23,249,425 to be precise). Writing 5 digits per inch, all the digits of this prime number would cover a stretch of highway over 73 miles in length! That’s only about 5 miles under the distance of three marathons. The previous largest prime number, discovered in January 2016, consists of only about 22 million digits (22,338,618 to be precise). Those would cover about 70.5 miles. So the newly discovered largest prime number would stretch out a further distance of almost 3 miles!

Here are the first 120 and the last 120 digits of this new largest prime number:

4673331833592310999883355855611155212513 2110281771449579858233859356792348052117 7207484311099740208849621368090038049317

… (the middle 23,249,185 digits omitted) …

2853760045187860554022233766729256792821 3196546734339594539737047636927989462799 9939614659217371136582730618069762179071

Any prime number of the form $2^p-1$ is called a Mersenne prime, in honor of the French monk Marin Mersenne, who studied these numbers more than 350 years ago. The latest largest prime is a Mersenne prime. So is the one before that. In fact, most of the record largest primes are of this type. There is a large worldwide community of volunteers who devote their free time to hunting for Mersenne primes, called the Great Internet Mersenne Prime Search (GIMPS for short). The prime number $2^{77232917}-1$ is the 50th known Mersenne prime. Its discoverer, Jonathan Pace, has been a GIMPS volunteer for over 14 years.

The recent discovery is exciting news. The discovery of such a large prime is akin to the scaling of a new and higher Mount Everest. The even more exciting news is that there are plenty more Mount Everests waiting to be discovered and scaled. Euclid proved more than 2,000 years ago that there are infinitely many prime numbers. Theoretically we know there is no such thing as the largest prime. When we speak of the largest prime, it is only the largest prime number verified by the computing resources currently available.

For more information about the latest new largest prime, see the website for GIMPS or Google the Internet. The news of this new discovery is reported in this piece from npr.org. A piece in a companion blog has a basic discussion on Mersenne primes.

$\copyright$ 2018 – Dan Ma
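These digit and mileage figures are easy to check; here is a short Python sketch (my own, not from the article) that reproduces them:

import math

p = 77_232_917  # exponent of the 50th known Mersenne prime, 2**p - 1

# 2**p - 1 has the same number of decimal digits as 2**p
# (2**p is never a power of 10), so count digits via log10.
digits = math.floor(p * math.log10(2)) + 1
print(digits)  # 23249425

# At 5 digits per inch, convert to miles (12 in/ft, 5280 ft/mi).
miles = digits / 5 / 12 / 5280
print(round(miles, 1))  # roughly 73.4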
https://www.jobilize.com/trigonometry/course/7-4-the-other-trigonometric-functions-by-openstax?qcr=www.quizover.com&page=5
# 7.4 The other trigonometric functions (Page 6/14)

## Key equations

Tangent function: $\tan t=\frac{\sin t}{\cos t}$

Secant function: $\sec t=\frac{1}{\cos t}$

Cosecant function: $\csc t=\frac{1}{\sin t}$

Cotangent function: $\cot t=\frac{1}{\tan t}=\frac{\cos t}{\sin t}$

## Key concepts

• The tangent of an angle is the ratio of the y-value to the x-value of the corresponding point on the unit circle.
• The secant, cotangent, and cosecant are all reciprocals of other functions. The secant is the reciprocal of the cosine function, the cotangent is the reciprocal of the tangent function, and the cosecant is the reciprocal of the sine function.
• The six trigonometric functions can be found from a point on the unit circle. See [link].
• Trigonometric functions can also be found from an angle. See [link].
• Trigonometric functions of angles outside the first quadrant can be determined using reference angles. See [link].
• A function is said to be even if $f(-x)=f(x)$ and odd if $f(-x)=-f(x)$ for all x in the domain of f.
• Cosine and secant are even; sine, tangent, cosecant, and cotangent are odd.
• Even and odd properties can be used to evaluate trigonometric functions. See [link].
• The Pythagorean Identity makes it possible to find a cosine from a sine or a sine from a cosine.
• Identities can be used to evaluate trigonometric functions. See [link] and [link].
• Fundamental identities such as the Pythagorean Identity can be manipulated algebraically to produce new identities. See [link].
• The trigonometric functions repeat at regular intervals. The period $P$ of a repeating function $f$ is the smallest interval such that $f(x+P)=f(x)$ for any value of $x$.
• The values of trigonometric functions can be found by mathematical analysis. See [link] and [link].
• To evaluate trigonometric functions of other angles, we can use a calculator or computer software. See [link].

## Verbal

On an interval of $[0,2\pi)$, can the sine and cosine values of a radian measure ever be equal? If so, where?

Yes, when the reference angle is $\frac{\pi}{4}$ and the terminal side of the angle is in quadrants I and III. Thus, at $x=\frac{\pi}{4}, \frac{5\pi}{4}$, the sine and cosine values are equal.

What would you estimate the cosine of $\pi$ degrees to be? Explain your reasoning.

For any angle in quadrant II, if you knew the sine of the angle, how could you determine the cosine of the angle?

Substitute the sine of the angle in for $y$ in the Pythagorean Theorem $x^2+y^2=1$. Solve for $x$ and take the negative solution.

Describe the secant function.
Tangent and cotangent have a period of $\pi$. What does this tell us about the output of these functions?

The outputs of tangent and cotangent will repeat every $\pi$ units.

## Algebraic

For the following exercises, find the exact value of each expression.

$\tan\frac{\pi}{6}$

$\sec\frac{\pi}{6}$

$\frac{2\sqrt{3}}{3}$

$\csc\frac{\pi}{6}$

$\cot\frac{\pi}{6}$

$\sqrt{3}$

$\tan\frac{\pi}{4}$

$\sec\frac{\pi}{4}$

$\sqrt{2}$

$\csc\frac{\pi}{4}$

$\cot\frac{\pi}{4}$

1

$\tan\frac{\pi}{3}$

$\sec\frac{\pi}{3}$

2

$\csc\frac{\pi}{3}$

$\cot\frac{\pi}{3}$

$\frac{\sqrt{3}}{3}$

For the following exercises, use reference angles to evaluate the expression.

$\tan\frac{5\pi}{6}$

$\sec\frac{7\pi}{6}$

$-\frac{2\sqrt{3}}{3}$

$\csc\frac{11\pi}{6}$

$\cot\frac{13\pi}{6}$

$\sqrt{3}$

$\tan\frac{7\pi}{4}$

$\sec\frac{3\pi}{4}$

$-\sqrt{2}$

$\csc\frac{5\pi}{4}$

$\cot\frac{11\pi}{4}$

–1

$\tan\frac{8\pi}{3}$

$\sec\frac{4\pi}{3}$

–2

$\csc\frac{2\pi}{3}$

$\cot\frac{5\pi}{3}$

$-\frac{\sqrt{3}}{3}$

$\tan 225°$

$\sec 300°$

2

$\csc 150°$

$\cot 240°$

$\frac{\sqrt{3}}{3}$

$\tan 330°$

$\sec 120°$

–2

$\csc 210°$

$\cot 315°$

–1

If $\sin t=\frac{3}{4}$, and $t$ is in quadrant II, find $\cos t, \sec t, \csc t, \tan t,$ and $\cot t$.

If $\cos t=-\frac{1}{3}$, and $t$ is in quadrant III, find $\sin t, \sec t, \csc t, \tan t,$ and $\cot t$.

$\sin t=-\frac{2\sqrt{2}}{3},\ \sec t=-3,\ \csc t=-\frac{3\sqrt{2}}{4},\ \tan t=2\sqrt{2},\ \cot t=\frac{\sqrt{2}}{4}$
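(A worked solution for the quadrant II exercise above, added here since the page leaves it unanswered.) From the Pythagorean Identity, $\cos t=-\sqrt{1-\sin^2 t}=-\sqrt{1-\frac{9}{16}}=-\frac{\sqrt{7}}{4}$, negative because cosine is negative in quadrant II. The remaining values follow from the reciprocal and quotient identities:

$\sec t=-\frac{4\sqrt{7}}{7},\quad \csc t=\frac{4}{3},\quad \tan t=\frac{\sin t}{\cos t}=-\frac{3\sqrt{7}}{7},\quad \cot t=-\frac{\sqrt{7}}{3}$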
https://math.stackexchange.com/questions/3038666/distance-between-two-polynomials-inner-product
# Distance between two polynomials (inner product)

I don't know how I've gotten this question wrong. I have to compute the distance between:

$$f(t) = 2t + 3$$ and $$g(t) = 3t^2 -1$$

Their inner product is defined as $$\int_{0}^{1}f(t)g(t)dt$$

So I figured the distance would be $$\sqrt{(f-g,f-g)}$$, where $$(f-g,f-g)$$ is the inner product of $$f-g$$ with itself. I got an answer of $$\sqrt{\frac{242}{15}}$$ but my book says $$\sqrt{\frac{123}{10}}$$ and I don't understand why. I've checked that the integral evaluates to my answer, so I don't think I've made a calculation error; maybe the error is in my setup?

• I agree with your answer, not with the one from your book. Maybe some more details about the problem description can clarify? Or maybe the book just has a mistake in it. – SmileyCraft Dec 13 '18 at 22:43

• Texts are full of errors, especially in numerical results like this. It may well be that the text is wrong and you are right. Your setup looks right to me, and using that, your computation is definitely right. My recommendation to you is to relax. – Lubin Dec 13 '18 at 22:46
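For the record, carrying out the computation (this verification is an addition to the thread) confirms the asker's value:

$$f(t)-g(t)=-3t^2+2t+4,\qquad (f-g)^2=9t^4-12t^3-20t^2+16t+16$$

$$\int_0^1 (f-g)^2\,dt=\frac{9}{5}-3-\frac{20}{3}+8+16=\frac{27-100+315}{15}=\frac{242}{15}$$

so the distance is $$\sqrt{\frac{242}{15}}$$, matching the asker's computation rather than the book's answer.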
http://mathhelpforum.com/advanced-algebra/185238-first-isomorphism-theorem-quotient-group-question.html
# Math Help - First Isomorphism Theorem/quotient group question

1. ## First Isomorphism Theorem/quotient group question

Let G be the group of invertible upper triangular 2x2 matrices over the real numbers. Determine if the following are normal subgroups, and if they are, use the first isomorphism theorem to identify the quotient group G/H, where H is

a) $a_{11}=1$
b) $a_{11}=a_{22}$

I have checked (hopefully correctly) that these are both normal subgroups (via $gHg^{-1}=H$). However, I am unsure as to what the next part is actually asking me? Thanks for any help

2. ## Re: First Isomorphism Theorem/quotient group question

Originally Posted by hmmmm
Let G be the group of invertible upper triangular 2x2 matrices. Determine if the following are normal subgroups and if they are use the first isomorphism theorem to identify the quotient group H. where H is a) $a_{11}=1$ b) $a_{11}=a_{22}$ I have checked (hopefully correctly) that these are both normal subgroups (via $gHg^{-1}=H$) However I am unsure as to what the next part is actually asking me? Thanks for any help

For a) the question is now wanting you to find a map from G to another group, H, such that the kernel is the set of all matrices such that $a_{11}=1$. It is analogous for b). If you have any trouble finding these maps, just ask. People will happily surrender the answers, but you finding them yourself is much, much more useful! Finally, what are your matrices over? $\mathbb{Z}$? $\mathbb{Q}$? $\mathbb{R}$? Some arbitrary field? (The question doesn't really make sense unless you know this...)

3. ## Re: First Isomorphism Theorem/quotient group question

Sorry, I should have said that it is over the real numbers; I have edited it now. I'm a bit confused by your answer: I want to find an isomorphism from G to H (H being a subgroup of G; this isn't possible, is it?) or do you just mean a different group H'? Thanks for the help, sorry for my confusion.

For a): So am I looking for a map $\phi:G\rightarrow G'$ such that $\phi(g)= a_{11}$ and where $G'=(\mathbb{R^*},\times)$? So the quotient group G/H is isomorphic to $(\mathbb{R^*},\times)$?

4. ## Re: First Isomorphism Theorem/quotient group question

Originally Posted by hmmmm

Yeah, sorry, I just meant a different group. The isomorphism looks correct, and would be my first guess, but you should check first that it is well-defined, surjective, and that the kernel is what you want it to be...
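A sketch of the maps the thread is pointing toward (an addition; worth verifying the details yourself): for a), define $\phi:G\rightarrow(\mathbb{R^*},\times)$ by $\phi\begin{pmatrix}a&b\\0&d\end{pmatrix}=a$. Since the (1,1) entry of a product of upper triangular matrices is the product of the (1,1) entries, $\phi$ is a surjective homomorphism with kernel $H=\{a_{11}=1\}$, so by the first isomorphism theorem $G/H\cong(\mathbb{R^*},\times)$. For b), define $\psi(A)=a_{11}/a_{22}$; the diagonal entries of upper triangular matrices multiply componentwise, so $\psi$ is again a surjective homomorphism onto $(\mathbb{R^*},\times)$ with kernel $\{a_{11}=a_{22}\}$, giving $G/H\cong(\mathbb{R^*},\times)$ here as well.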
https://modecor.com.sa/the-odyssey-bulujj/yeast-fermentation-definition-cdf788
In the brewing of most traditional beer, the sugars are derived mainly from malted barley, although other cereal sources and other plant sugars can also be used. Yeast fermentation is a metabolic process that occurs when the yeast feeds off a range of carbohydrates (starches and sugars) that are in the flour, breaking them down and releasing carbon dioxide, ethanol, flavour and energy. The alcohol in bread dissipates in baking. fermentation meaning: 1. a process of chemical change in food or drink because of the action of yeast or bacteria, which…. Yeast is the most commonly used leavener in bread baking and the secret to great bread making lies in its fermentation, or the metabolic action of yeast. In sparkling wines and beer some of … fermentation A form of anaerobic respiration occurring in certain microorganisms, e.g. Yeast are the main drivers of quality beer making. The following is the word equation for fermentation pathway in plant and yeast. Fermentation. Yeast can also ferment in presence of oxygen as long as the sugars in the wort are over 0.1%. For anaerobic fermentation, the yield was 0.59 cmol eth/cmol glc and the volumetric production rate was 0.23 g eth (lh). Lactic acid fermentation. Fermentation is a chemical change that happens in vegetable and animal substances. Alcoholic food fermentation is only performed by yeast, leading to the release of ethanol and carbon dioxide. Yeast ferment malt sugars, creating carbon dioxide and alcohol as by-products. Fermentation definition: a chemical reaction in which a ferment causes an organic molecule to split into simpler... | Meaning, pronunciation, translations and examples See more. Lager Yeast, Saccharomyces pastorianus, is a bottom fermenting yeast used for brewing lager style beers. If you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked. The carbon dioxide gas bubbles out of the solution Alcoholic fermentation occurs by the action of yeast; lactic acid fermentation, by the action of bacteria. Fermentation is an anaerobic process in which energy can be released from glucose even though oxygen is not available. Yeast and Fermentation. Dictionary ! yeast definition: 1. a type of fungus that is used in making alcoholic drinks such as beer and wine, and for making…. Fermentation definition, the act or process of fermenting. The balanced equation for fermentation is. It is a necessary process in winemaking, in order to make the wine alcoholic. All-Purpose Flour Learn more. Menu. Yeast fermentation: biochemical process of energy production under anaerobic conditions. See more. A simple fermentation definition can be: the process of breaking down of complex substances into a simpler form. This product was developed in the 1940’s for use by the armed forces. Fermentation is defined as a chemical change brought about using microorganisms, e.g., in the biotechnology industry for production of pharmaceuticals, food additives, and animal feed-stuffs. It is physiologically distinct from the top fermenting (so called because it forms a thick foam at the top of the wort during fermentation) ale yeast S. cerevisiae in its abilities to ferment at cooler temperatures and to ferment the sugar melibiose. The yield of ethanol in cmol for aerobic fermentation was 0.47 cmol eth/cmol glc, and the volumetric production rate was 0.31 g eth (lh). In the winemaking process, fermentation starts during crushing and can last until after bottling. 
$glucose\to{carbon~dioxide}+ethanol+energy$

This process is irreversible. Fermentation occurs in yeast cells and bacteria, and also in the muscles of animals.

fermentation: the process by which the living cell is able to obtain energy through the breakdown of glucose; by one dictionary definition, a chemical change with effervescence. It is the breakdown of carbs like starch and sugar by bacteria and yeast, and an ancient technique of preserving food: people have used fermentation to make beer, wine, and bread since as early as 7000 BC, and it is still used for sour foods like pickles, sauerkraut, kimchi, and yogurt. The two most common types of fermentation are ethanol fermentation and lactic acid fermentation. In ethanol fermentation, sugars are converted by yeast into ethanol and carbon dioxide. In lactic acid fermentation, which occurs by the action of bacteria and also in the muscle cells of animals, pyruvic acid and nicotinamide adenine dinucleotide + hydrogen (NADH) form lactic acid and NAD+; bacterial strains convert starches or sugars into lactic acid, requiring no heat in preparation.

glucose: dextrose, or grape sugar; a monosaccharide sugar with the empirical formula C6H12O6. This carbohydrate occurs in the sap of most plants and in the juice of grapes and other fruits.

yeast: any of various small, single-celled fungi of the phylum Ascomycota that reproduce by fission or budding, the daughter cells often remaining attached, and that are capable of fermenting carbohydrates into alcohol and carbon dioxide. Yeast fermentation is the basis of the baking and brewing industries (see baker's yeast), and yeasts are the main drivers of quality beer making. In baking, fermentation happens when yeast and bacteria convert sugars mainly into carbon dioxide; it is the magical process that allows a dense mass of dough to become a well-risen and flavorful loaf of bread. The amount of yeast required to increase the bread dough volume depends on the type of yeast used, the temperature of the dough, and the duration of the fermentation time. In brewing, the malt and hops are the source of sugar for the yeast, and the yeast ferments the malt sugars, creating carbon dioxide and alcohol as by-products; yeast can also ferment in the presence of oxygen as long as the sugars in the wort are over 0.1%. Along the way, yeast produce a whole range of flavoring compounds, including esters, phenols, and a large variety of other chemicals. In the winemaking process, fermentation converts the sugars in grape juice to alcohol; it starts during crushing and can last until after bottling. The entire fermentation cycle can be monitored in real time, allowing optimization of the process; in one reported run, the ethanol yield was 0.59 cmol eth/cmol glc and the volumetric production rate was 0.23 g eth/(l·h).

Active Dry Yeast: yeast that has been dried, forming small dehydrated granules. This product was developed in the 1940s for use by the armed forces.

What a stuck fermentation isn't: there are a few things you shouldn't confuse for a stuck fermentation.
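For clarity, the two pathways named above can be written as balanced equations (their standard textbook forms):

$$\ce{C6H12O6 -> 2C2H5OH + 2CO2} \qquad \text{(ethanol fermentation)}$$

$$\ce{C6H12O6 -> 2CH3CH(OH)COOH} \qquad \text{(lactic acid fermentation)}$$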
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-3-review-exercises-page-513/60
## Precalculus (6th Edition) Blitzer

By the zero product property, a product is zero only when a factor of the product is zero. Here, on the LHS, the factor $\ln 1=0$ (a basic property of logarithms). Thus the statement is true.
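(For completeness: $\ln 1 = 0$ because $e^{0} = 1$, so any product having $\ln 1$ as a factor must equal zero.)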
https://mathematica.stackexchange.com/questions/138317/calculate-sift-features-at-specific-image-positions
Calculate SIFT features at specific image positions

Is it possible to use ImageKeypoints[...] to calculate SIFT feature vectors at specific image coordinates?

• ImageKeypoints[] uses the SURF transform, which is often superior to SIFT. But I think you would need to code up your own SIFT feature transform. – David G. Stork Feb 21 '17 at 21:45
• Is it possible to calculate SURF features at specific grid positions? I need this for feature extraction on texture image patches. – Robinaut Feb 21 '17 at 21:47
• A simple hack would be to extract the small patch of the image centered on the point in question, then apply ImageKeypoints[] with a threshold that returned a very large number of features, and choose the returned feature closest to the center. – David G. Stork Feb 21 '17 at 21:50
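For comparison with other toolkits: OpenCV's SIFT implementation lets you skip detection and evaluate descriptors at caller-chosen coordinates. A minimal Python sketch (OpenCV >= 4.4; the file name and grid spacing are placeholders of mine, not from the question):

```python
# Minimal sketch: SIFT descriptors on a fixed grid with OpenCV,
# skipping keypoint detection entirely. "texture.png" is a placeholder.
import cv2

img = cv2.imread("texture.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()

# Hand-built keypoints on a 16-pixel grid; the third argument is the
# keypoint "size", which fixes the descriptor's support region in pixels.
grid = [cv2.KeyPoint(float(x), float(y), 16)
        for y in range(8, img.shape[0], 16)
        for x in range(8, img.shape[1], 16)]

# compute() evaluates 128-dimensional descriptors at the given points.
keypoints, descriptors = sift.compute(img, grid)
print(descriptors.shape)  # (number of surviving keypoints, 128)
```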
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=129&t=18056&view=print
### entropy equations

Posted: Fri Jan 20, 2017 11:27 pm

In class we talked about how $\Delta S= k_b \ln\frac{W_2}{W_1} = k_b \ln\frac{V_2}{V_1}$. Does that mean W = V, and why is that?

### Re: entropy equations

Posted: Fri Jan 20, 2017 11:34 pm

I don't think it means volume is equal to work, but work is measured in the change in the volume of a system (the work done by the system or the work done to a system). I believe it is related to the relation q=-w, which is equal to nRT ln V2/V1. I believe that's how the W2/W1 becomes the V2/V1 in the equation.

### Re: entropy equations

Posted: Sat Jan 21, 2017 12:43 pm

But I don't think W is work? It's the W for degeneracy, I believe.

### Re: entropy equations

Posted: Sun Jan 22, 2017 10:02 am

Hey Teresa! W is the degeneracy. Degeneracy identifies how many different configurations there are that have the same energy. Therefore, you can imagine that if I double the volume of a gas I double the amount of configurations the molecules could obtain.
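The last reply can be made quantitative with the standard ideal-gas counting argument: for $N$ independent particles, each particle's configuration count scales with the available volume, so $W \propto V^N$ and

$$\Delta S = k_B \ln\frac{W_2}{W_1} = k_B \ln\left(\frac{V_2}{V_1}\right)^{N} = N k_B \ln\frac{V_2}{V_1}.$$

The equation quoted from class is the single-particle case $N=1$; for one mole doubling its volume, $\Delta S = N_A k_B \ln 2 = R\ln 2 \approx 5.76\ \mathrm{J/K}$.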
https://www.physicsforums.com/threads/testing-for-divergence.161090/
# Testing for divergence

## Homework Statement

Does the sum of the series from n=1 to infinity of 1+sin(n)/10^n converge or diverge?

## The Attempt at a Solution

I can use the comparison test or the limit comparison test. I'm not sure where to go from here.

What can you tell me of the limit of the terms as n reaches infinity?

Well, the top part diverges, the bottom causes it to go to 0. So I don't know what happens faster. Either it converges to 0, or it diverges. The solution must involve the comparison test or the limit comparison test. But I'm not sure what to compare it to.

If the limit of the terms as n goes to infinity is not 0, then the series diverges... wait, is it (1+sin(n))/10^n or 1+(sin(n)/10^n)?

Try comparing sin n to n.

If it's (1+sin(n))/10^n, then can you tell me: 1+sin(n) is smaller than what for all n?

It's (1+sin(n)). Hrm, smaller than 2. So I can compare it to 1/5^n. Now, I need to figure out how to prove that series converges. Is it a geometric series? Actually, I know it converges, based on the root test. But I don't think we can use the root test now.

Right, but 1/5^n is wrong; keep it 2/10^n. Now, can you tell me if you know the root or the ratio test of a series?

Alright. So the root test gives me the limit of 2^(1/n)/10. I don't know what 2^(1/n) goes to. Is that even possible? The root test is for when n goes to infinity...

As n goes to infinity, 1/n → 0, so 2^(1/n) → 1, and thus (2^(1/n))/10 < 1. You have just now proved that the series 2/10^n converges; how can you relate this to the series you started with?
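Putting the thread's pieces together, the full comparison argument is two lines: since $-1 \le \sin n \le 1$ for all $n$,

$$0 \le \frac{1+\sin n}{10^n} \le \frac{2}{10^n}, \qquad \sum_{n=1}^{\infty}\frac{2}{10^n} = 2\cdot\frac{1/10}{1-1/10} = \frac{2}{9},$$

so the dominating series is a convergent geometric series, and the original series converges by the direct comparison test.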
https://eprint.iacr.org/2021/090
## Cryptology ePrint Archive: Report 2021/090

A New Twofold Cornacchia-Type Algorithm and Its Applications

Bei Wang; Yi Ouyang; Honggang Hu; Songsong Li

Abstract: We focus on exploring more potential of Longa and Sica's algorithm (ASIACRYPT 2012), which is an elaborate iterated Cornacchia algorithm that can compute short bases for 4-GLV decompositions. The algorithm consists of two sub-algorithms, the first one in the ring of integers $\mathbb{Z}$ and the second one in the Gaussian integer ring $\mathbb{Z}[i]$. We observe that $\mathbb{Z}[i]$ in the second sub-algorithm can be replaced by another Euclidean domain $\mathbb{Z}[\omega]$ $(\omega=\frac{-1+\sqrt{-3}}{2})$. As a consequence, we design a new twofold Cornacchia-type algorithm with a theoretic upper bound of output $C\cdot n^{1/4}$, where $C=\frac{3+\sqrt{3}}{2}\sqrt{1+|r|+|s|}$ with small values $r, s$ given by the curves. The new twofold algorithm can be used to compute $4$-GLV decompositions on two classes of curves. First it gives a new and unified method to compute all $4$-GLV decompositions on $j$-invariant $0$ elliptic curves over $\mathbb{F}_{p^2}$. Second it can be used to compute the $4$-GLV decomposition on the Jacobian of the hyperelliptic curve defined as $\mathcal{C}/\mathbb{F}_{p}:y^{2}=x^{6}+ax^{3}+b$, which has an endomorphism $\phi$ with the characteristic equation $\phi^2+\phi+1=0$ (hence $\mathbb{Z}[\phi]=\mathbb{Z}[\omega]$). As far as we know, none of the previous algorithms can be used to compute the $4$-GLV decomposition on the latter class of curves.

Category / Keywords: public-key cryptography / Elliptic curves, Hyperelliptic curves, Endomorphisms, 4-GLV decompositions, Twofold Cornacchia-type algorithms

Date: received 24 Jan 2021, last revised 11 May 2021

Contact author: wangbei at mail ustc edu cn

Available format(s): PDF | BibTeX Citation

Short URL: ia.cr/2021/090
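As background for readers: the classical (single) Cornacchia algorithm that such twofold constructions iterate solves $x^2 + d\,y^2 = p$ for a prime $p$ and $0 < d < p$. A minimal Python sketch of that classical step, assuming SymPy for the modular square root (an illustration of mine, not the paper's twofold algorithm):

```python
# Minimal sketch of the classical Cornacchia algorithm: solve x^2 + d*y^2 = p
# for prime p and 0 < d < p (the single step that twofold variants iterate).
from math import isqrt
from sympy.ntheory import sqrt_mod

def cornacchia(d: int, p: int):
    r = sqrt_mod(-d % p, p)      # square root of -d modulo p, if one exists
    if r is None:
        return None
    a, b = p, r
    limit = isqrt(p)
    while b > limit:             # truncated Euclidean algorithm on (p, r)
        a, b = b, a % b
    y2, rem = divmod(p - b * b, d)
    if rem:
        return None
    y = isqrt(y2)
    return (b, y) if y * y == y2 else None

print(cornacchia(1, 13))  # (3, 2), since 3^2 + 1*2^2 = 13
```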
https://stats.stackexchange.com/questions/66906/ideal-learning-sample-in-machine-learning
# Ideal learning sample in machine learning

I am constructing a model for the prediction of a binary (Yes/No) outcome. I have a learning sample that gives the machine 1500 examples of the "Yes" group and 500 examples of the "No" group. Should I be using all the data I have as input to train the machine? Would this be biased towards the "Yes"? I had the thought of giving 500 "Yes" and 500 "No" examples, but I am not sure if this is going to positively or negatively affect my future predictions. Thanks.

## 2 Answers

Most learning algorithms have a way to deal with skewed data sets. In general, use as much as you can for learning to increase generalization performance.

• Thanks for your reply. Does that apply even if the data was... immensely skewed? Like 2500 Yes to 400 No? – Error404 Aug 9 '13 at 9:26
• Yes, it does. Many algorithms can deal with such things properly. – Marc Claesen Aug 9 '13 at 9:27
• @Error404 That's skewed, but not what I'd call immensely skewed. Many applications involve probabilities of $10^{-3}$ or less: rare diseases, insurance risk, credit risk, etc. – Hong Ooi Aug 9 '13 at 9:30

Questions you should ask yourself are "Why is my training set biased?" and "What will the data look like at application time?" If the bias is a natural property of the data, that bias should probably be represented in the training set as well. If the bias is a selection bias due to the way the data was collected, you should try to compensate.

For most classifiers, training algorithms and datasets, the bias will affect the prediction! However, this bias might be desirable. When doing face recognition, for example, and your classifier is going to be applied to your photo albums, you want your training algorithm to be able to use the fact that most pictures will be pictures of you. However, if your classifier is going to be used with photo albums of all kinds of people, and the only reason your training set contains more pictures of you is because they were easier to get, you should try to compensate.

You don't necessarily need to throw away data to compensate for the bias. Many training algorithms allow you to weight your training examples (gradient descent and expectation maximization, for example). To get rid of any bias, try to weight positive examples by $1/\#Yes$ and negative examples by $1/\#No$, where $\#Yes$ and $\#No$ are the number of positive and negative examples, respectively.

Say your dataset is $\mathcal{D} = \{ (x_n, y_n) : n=1,\dots,N\} \subseteq \mathbb{R}^d \times \{0, 1\}$. Further let $\mathcal{D}_1 = \{ (x_n, y_n) : y_n = 1 \} \subseteq \mathcal{D}$ be your positive examples, and $\mathcal{D}_0 \subseteq \mathcal{D}$ be your negative examples. Typically, the objective function optimized by your classifier during training can be written $$F(\theta) = \sum_{(x_n, y_n) \in \mathcal{D}} f(x_n, y_n; \theta),$$ where $\theta$ are some parameters that are optimized. We can split that objective function into a sum over positive and negative examples, $$F(\theta) = \sum_{(x_n, y_n) \in \mathcal{D}_0} f(x_n, y_n; \theta) + \sum_{(x_n, y_n) \in \mathcal{D}_1} f(x_n, y_n; \theta).$$ If the number of positive examples $\#\mathcal{D}_1$ is much larger than the number of negative examples $\#\mathcal{D}_0$, then the second term will dominate and the optimizer will try to make sure that positive examples are correctly classified and will care less about negative examples being wrongly classified.
To compensate, we could weight the individual examples or the two terms, $$\frac{1}{\#\mathcal{D}_0} \sum_{(x_n, y_n) \in \mathcal{D}_0} f(x_n, y_n; \theta) + \frac{1}{\#\mathcal{D}_1} \sum_{(x_n, y_n) \in \mathcal{D}_1} f(x_n, y_n; \theta).$$ Now both terms contribute equally to the overall classification error or whatever you are trying to optimize.

• Thanks for the detailed explanation, I am still wondering about a couple of things though. 1- I did not get how to get rid of the bias by dividing 1/#Yes and 1/#No, what do I do after I get these values? 2- The answer I had by Marc (you can see it above) was that the algorithms "deals" with skewed data rather than "applies" the skewing to the future samples. To be honest, I would think of the case the way you explained it, but does that make the above answer inaccurate? – Error404 Aug 9 '13 at 11:47
• You don't divide the data, you weight the data. I added a little bit more formal explanation to make this more clear. I can't say what kind of algorithms or automatic mechanisms @MarcClaesen had in mind when he wrote his answer. – Lucas Aug 9 '13 at 12:29
• I will check that. Cheers mate – Error404 Aug 9 '13 at 13:08
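For concreteness, here is the reweighting above as runnable code (a minimal scikit-learn sketch; the synthetic Gaussian data and the 1500/500 split are stand-ins mirroring the question's setup):

```python
# Minimal sketch: weight each example by the inverse size of its class,
# i.e. 1/#Yes for positives and 1/#No for negatives, as in the answer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.5, 1.0, (1500, 2)),    # 1500 "Yes" examples
               rng.normal(-0.5, 1.0, (500, 2))])   # 500  "No"  examples
y = np.array([1] * 1500 + [0] * 500)

counts = np.bincount(y)            # class sizes: [#No, #Yes]
sample_weight = 1.0 / counts[y]    # per-example weight 1/(own class size)

clf = LogisticRegression().fit(X, y, sample_weight=sample_weight)

# Equivalent built-in shortcut: class_weight="balanced" rescales classes
# by n_samples / (n_classes * class_count).
clf_balanced = LogisticRegression(class_weight="balanced").fit(X, y)
```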
https://crypto.stackexchange.com/questions/30068/is-md5xmd5yx-secure?noredirect=1
# Is md5(x)&md5(y&x) secure?

I was wondering if the following hash function f(x) := md5(x) & md5('abc'&x) (with & as concatenation operator) is secure. This scheme can even be extended like this: g(x) := md5(x) & md5('abc'&x) & md5('def'&x)

To find a collision in f(x), the following requirements would have to be met:

f(x)=f(x'), x!=x' <==> md5(x)=md5(x') ^ md5('abc'&x)=md5('abc'&x')

or

g(x)=g(x'), x!=x' <==> md5(x)=md5(x') ^ md5('abc'&x)=md5('abc'&x') ^ md5('def'&x)=md5('def'&x')

Is it realistic that such a condition can be found? I can't imagine that. [I would be happy to learn more about the math behind it]

• I agree with @NeilSmithline. My response, and probably the response of others, is going to be "just avoid using MD5 and any constructions using MD5." But crypto.SE can better respond to the math part. – HexTitan Oct 24 '15 at 17:06
• For collision resistance, this question reduces to Are there MD5 collisions for inputs of different length? – CodesInChaos Oct 24 '15 at 21:57
• @CodesInChaos, how so? Does not seem like different-length collisions would be required (or immediately helpful) for breaking this. – otus Oct 25 '15 at 10:59

## 1 Answer

Secure against what and for what purpose? MD5 remains too fast for most human-typed passwords. You should use something like bcrypt or PBKDF2 or sha256crypt, where there is a tunable amount of thousands to millions (or more) of rounds of hashing to generate each hash. You really don't want to allow users to try several billion hashes per second per GPU.

f(x) := md5(x) ++ md5('abc' ++ x) will be just as vulnerable to preimage attacks as md5 -- that is, given some hash f(x) = 5d41402abc4b2a76b9719d911017c592d76051e1dae76d1f309598102df58d84, find x. You simply truncate to the first 128 bits of the 256-bit hash, and it reduces to the problem of md5(x)=5d41402abc4b2a76b9719d911017c592, which you can type into several online md5 reverse lookups (the second half can be reversed the same way without me adding it). So you can see this method will not be secure for user-chosen passwords (at least for typical users). Luckily, last I checked (and according to Wikipedia), there are no practical publicly known preimage attacks on MD5 for high-entropy strings. There is an MD5 preimage attack that takes 2^123.4 instead of the full expected complexity of 2^128 for a 128-bit hash, but it will not be practical (about 24 times faster than brute force).

Now md5 is not collision resistant. It is possible to construct pairs of text such that m1 != m2, while md5(m1) == md5(m2). Granted, if you found a collision in md5, this collision would not work immediately for f(x), as in general md5('abc' ++ m1) != md5('abc' ++ m2) when md5(m1) == md5(m2). However, the underlying flaw of md5, namely that it is built on the Merkle-Damgård construction, will still be present and probably could still be exploited (things like how md5(m1++x) == md5(m2++x) will always be true if md5(m1) == md5(m2)), even if it's not immediately obvious and would be more complicated than simply combining existing attacks.

• The password hashing part is a bit off topic. If we ask "is <hash function> secure", it usually means whether it is preimage and collision resistant. – otus Oct 25 '15 at 13:29
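The truncation argument in the answer can be reproduced in a few lines (a Python sketch of the question's f; the point is that the first 16 bytes of the output are exactly md5(x)):

```python
# Sketch of f(x) = md5(x) || md5('abc' || x) from the question.
# A preimage attack on f reduces to one on plain md5, because the
# first 16 bytes of f(x) are md5(x) verbatim.
import hashlib

def f(x: bytes) -> bytes:
    return hashlib.md5(x).digest() + hashlib.md5(b"abc" + x).digest()

h = f(b"hello")
assert h[:16] == hashlib.md5(b"hello").digest()
print(h.hex())  # starts with 5d41402a..., the md5 of "hello" quoted above
```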
https://chemistry.stackexchange.com/questions/115937/what-reaction-makes-sodium-silicate-harden-in-co2
# What reaction makes sodium silicate harden in CO2

I have looked all over the web, and I can only find information stating that it does happen. I don't doubt that; I just can't figure out what is happening. Some sources state that an acid can cause the hardening; others say heat causes it. My speculation would be that the water needs to be driven out for the heat to work, but I don't have a clue how the others work. And it would imply that there are multiple routes to the reaction (removal of water, versus addition of some acid, versus addition of $\ce{CO2}$), so you should have all kinds of different reactions... or not, what do I know?

An ancillary question: when is the hardened product water-soluble and when is it not? I found some obscure commenters, on Reddit I think, speculating that its solubility would have something to do with how it was hardened. I also found an ambiguous entry on its Wikipedia page saying something about solubility, and more seems to say that different types or ratios have different solubilities. Unfortunately I didn't collect any links, and I feel like my research was so sporadic and scattered that it would only hinder.
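For reference, the reaction usually cited for CO2 hardening of waterglass (for instance in the foundry "CO2 process" for sand moulds) is precipitation of silica gel with sodium carbonate as the by-product; a simplified form is

$$\ce{Na2SiO3 + CO2 -> Na2CO3 + SiO2}$$

where the $\ce{SiO2}$ appears as a hydrated gel that binds and stiffens the mass. (This is a textbook simplification; commercial silicates are really $\ce{Na2O.$n$SiO2}$ solutions with varying ratios, which is also part of why their solubility varies.)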
https://www.hackmath.net/en/math-problem/681?tag_id=103,65
# Negative percentage

In 2004, the company had a loss of 40900 Euros. Two years later it was already in profit, 48900 Eur. Calculate by what percentage the company increased profits in these two years.

Result: p = -219.6 %

#### Solution:

$p = \dfrac{ 48900-(-40900)}{ -40900 }\cdot 100 = -219.6 \%$
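A quick check of the arithmetic in plain Python (my sketch), which also shows why the sign comes out negative: the percentage-change formula divides by the 2004 baseline, and that baseline is itself negative.

```python
# Percentage change from a 40900 EUR loss to a 48900 EUR profit.
old, new = -40900.0, 48900.0
p = (new - old) / old * 100  # dividing by a negative baseline flips the sign
print(round(p, 1))           # -219.6
```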
https://en.m.wikisource.org/wiki/The_Aether
# The Aether

The Æther. By Norman Campbell, Fellow of Trinity College, Cambridge[1].

§ 1. The position of the conception of "the æther" in modern physics is anomalous and unsatisfactory. From the works of some writers it might appear that at no period was the conception of more fundamental importance or of more indisputable validity, but there are others who have ceased altogether to employ the conception and regard it as a hindrance to progress. And this conflict of opinion is of a somewhat different nature to almost all the previous disagreements which have divided men of science. The question which is involved is not primarily one of the value of experimental evidence, or of the main features of its interpretation. No doubt much of the dissatisfaction with "the æther" is based on the recent theories of the atomic nature of radiation and on the proof that the principle of relativity is an adequate foundation for electromagnetic theory, but it is clear that such theories do not provide either a sufficient or a necessary reason for abandoning the conception. Sir J. J. Thomson, the author of the earliest and most far-reaching atomic theory of radiation, devoted much of his presidential address before the British Association to a description of the properties of the æther, while, on the other hand, I hope to show that a consideration of no ideas more novel than the elements of electrostatics may lead to grave doubts concerning the utility of that conception. If both sides could be induced to express their views in detail, the difference between them would be found to concern the fundamental principles of scientific knowledge rather than more special problems of observation and intuition. Perhaps it is because men of science exhibit considerable shyness in discussing the essential foundations of their study that there has been so little direct attack on or defence of the conception of the æther. The following remarks may help in some measure towards a thorough consideration of the whole of this important problem[2].

§ 2. We must first inquire what is meant by "the æther," and why it was ever invented. Almost the only definition of the conception with which I am acquainted is that of the late Lord Salisbury, who described it as "the subject of the verb 'to undulate.'" It is not immediately obvious why that verb requires a special subject, but a very little consideration will make out a case which is at least prima facie plausible. The principle of the conservation of energy is perhaps the only proposition that is accepted by all physicists as a necessary basis for their science, and the maintenance of that principle would seem at first sight to require some such conception as the æther. When a body is radiating energy to another at a lower temperature separated from it by a finite distance, there is a finite interval of time during which energy has been lost by the first body and has not been gained by the second; if the energy is not to be regarded as lost altogether for that interval, it might seem that it must be regarded as gained by some third body which is neither the source nor the receiver. This body, the body which is the vehicle of the undulatory energy of light, is the æther. The development of the electromagnetic theory of light has led to the belief that the energy of radiation is essentially of the same nature as that which is localized around an electrically charged body, at rest or in motion.
The æther is regarded as the vehicle, not only of the energy of radiation, but also of all forms of electromagnetic energy, and we may define it simply as "the body in which electromagnetic energy is localized." So rough a definition will doubtless not be found satisfactory by all, but it will suffice for our purpose because it draws attention clearly to the features of the conception of the æther, as generally understood, which it is my present object to discuss.

§ 3. Of course a definition is not a proposition, and is incapable of being either true or false. Whatever definition of a scientific concept is adopted, it is always possible by framing suitably the propositions concerning it to state a theory in accordance with the observations. But, as a matter of fact, in science, as well as in other studies, the propositions are usually historically prior, though logically subsequent, to the definitions. The propositions are chosen for their simplicity, their suitability for mathematical development, or for some such reason, and the first requisite of the definitions of the concepts concerned in the propositions is that they should be such as to make these propositions true. (An obvious case of such a procedure is afforded by the concept "a perfect gas.") In the case of the æther the propositions which have to be true are represented by the six equations of Maxwell; the definition of the æther has to be chosen so that these propositions are true, when the axes of reference are "fixed in the æther." If it turns out that, with the definition adopted, these equations are not true when the axes of reference are "fixed in the æther," then we may say roughly that the definition is false, though strictly the falsity should be attributed to the equations. For the purposes of our discussion it will be convenient and will involve no loss of generality if we replace the set of equations by a single simple deduction from them — the proposition that an electric charge $e$ moving with a velocity $u$ relative to the axes of reference is equivalent to a current element of strength $eu$, the direction of which is coincident with the path of the charge.

§ 4. It might seem at first sight that such a definition of the æther as has been given could not possibly render such a proposition untrue, but attention must be drawn to the first words of the definition — "the body" — and to the proviso in the proposition that the axes of reference are "fixed in the æther." The statement that the æther is "the body ..." undoubtedly suggests, and has been commonly taken to mean, that the æther, in so far as the relative motion of its parts is concerned, resembles a block of some solid material: that, except so far as it is disturbed by the vibrations which it transmits, its parts have no relative motion: that the motion of a body relative to the æther is uniquely determined and is, in general, unrelated to the motion of that body relative to any material system. Until quite lately it seems to have been assumed almost universally that the velocity to which the magnetic effect of a moving charge is proportional is not its velocity relative to some material system, but to some system independent of all material bodies, extended throughout the universe and having no relative motion between its parts. That such a proposition is dubitable will not be disputed when it is stated explicitly.
My present object is to show that it is so far from being even inherently probable that it would never have been accepted for a moment, if it had not been for the unfortunate invention of so attractive a word as "the æther." It seems to me certain that if "the æther" had been replaced by a word in the plural number, or if to the definition offered above the words "or bodies" had been added, one of the most difficult problems of modern physics would never have been presented.

§ 5. Axes "fixed in the æther" involve the idea of the motion of a material system relative to the æther, or conversely, of the motion of the æther relative to a material body. Let us inquire what can be meant by such a velocity of the æther. When we speak of the velocity of a material body A relative to a body B, one of two definitions of the word "velocity" is implied, according as the bodies are solid or fluid. In the former case the velocity is the rate of change of the distance of a point on A, identified by some property distinguishing it from neighbouring points, from a point on B similarly identified[3]; in the latter case velocity means the rate of transference of the body (measured by volume) across unit cross-section. It will probably be admitted that the latter definition (which is connected with the former and fundamental definition only by our belief in quasi-solid molecules) is not relevant in the case of the æther, but the former might seem to be applicable. Consider the simple case of two or more electrically charged bodies moving with different uniform velocities relative to some observer. Round each body is distributed electrostatic energy localized in the æther; the positions of the portions of the æther which contain stated amounts of energy (belonging to one and the same body), relative to each other or to the charged nucleus, are not changed by the motion. If the æther is the body where electrical energy is localized, it seems obvious and simple to identify points in the æther, as required by the definition of velocity, by the amounts of energy contained in them. Then the velocity of the æther relative to the observer would be different according as one or other of the charged bodies was considered, and would be in each case the same as the velocity of the corresponding charged body relative to the observer.

§ 6. Such, I think, is the simple and obvious view, leading directly to the principle of relativity, which would have been accepted without question had it not been for the use of the singular word "æther." "If," it was said, "there is only one æther, it cannot have more than one velocity relative to any one observer: hence we must suppose that portions of æther are not to be identified by the energy which they contain, that the energy moves through the æther, being transferred from one portion to another, with a velocity which has nothing to do with the velocity of the æther itself." This view is, I imagine, maintained by those who write of the æther; let us see whither it leads us.

§ 7. It is clear at once that if it be not permitted to identify a point in the æther by the energy localized at it, no other means of identification can be substituted. All optical phenomena prove that the æther (outside material bodies) is perfectly homogeneous, so far as the power to contain energy is concerned; the velocity of radiation is rectilinear and uniform in whatever direction it is propagated.
All portions of the æther which contain the same amount of energy are, so far as experiment can tell, perfectly similar, and there is no possible means of distinguishing between them; neither have the boundaries of the æther, if there are such, ever been attained. The first requisite for the application to the æther of the definition of velocity, which is implied in all statements concerning the velocities of material bodies, cannot be fulfilled: until some other definition of velocity is put forward as applicable to the æther, all propositions about velocity of or relative to the æther are meaningless. On the view of the æther which rejects the identification of portions of the æther by their energy-content, the first statement which is made about the velocity of the æther must either be a definition or be wholly devoid of significance. If a man tells me that his watch weighs 100 grammes, his statement is for me a significant proposition, because the ordinary definition of "weight" can be applied to a watch; but if he tells me that the colour of his watch weighs 100 grammes, and refuses to tell me how a colour is to be weighed, I can only conclude that he is uttering meaningless nonsense, or, if this explanation should be excluded by the fact that he is a learned professor, that he means to inform me that, for some reason which may be quite satisfactory, he wishes me to understand "the colour of his watch" when he says "that which weighs 100 grammes." Accordingly, when one who rejects the principle of relativity, writes down Maxwell's equations, or the simple deduction from them given above, without stating distinctly what is the relative velocity between axes "fixed in the æther," and some material system (relative to which other velocities can be measured), the only meaning which he can convey is that he proposes to call by the term "velocity $u$ relative to the æther," the state of motion of a body bearing a charge $e$ when its magnetic effect measured by any observer is equivalent to a current element of strength $eu$,..... Moreover, it follows that, if he deduces propositions from his fundamental hypotheses and compares the result with experiment, the only valid information which he can attain by his endeavours is with what velocity relative to the æther (according to his definition) some one or more of the bodies which he observes is moving. He cannot possibly confirm or refute any assumptions which he has made in forming his hypotheses. He is in the position of a mathematician treating equations in which there are one or more unknown variables. The most that he can do is to find the values of those variables; he cannot attain to an identity or non-identity, which will prove that his original equations were either true or false.

§ 8. It may be suggested that I have overlooked an alternative meaning of "velocity" which can be defined independently of the propositions of electromagnetism. There is a quantity termed "absolute velocity" introduced by dynamics, and it may be thought that it is possible to assert that the velocity of a charged body relatively to the æther is its "absolute velocity." Such an assertion is possible and would remove the objections raised in the last paragraph, but it raises far more serious difficulties.
For, as is shown in the paper on the "Principles of Dynamics" in this number of the Magazine, "absolute velocity" (or rather Absolute Velocity) is meaningless unless the fundamental propositions of dynamics are assumed to be true. Now those propositions state that the mass of a body is independent of its state of motion. When it is deduced from the equations of electromagnetism that the mass of a charged body varies with its motion, the propositions of dynamics are denied to be true, and, accordingly, the term "Absolute Motion" is deprived of all significance. It is logically impossible to assert at the same time (1) that axes fixed in the æther are axes of which the Absolute Velocity is nil, and (2) that the mass of a body increases with its velocity relative to these axes. If one of the two propositions is taken to be true, the other becomes, not false, but meaningless. We must assume, therefore, that adherents to the æther believe that "velocity relative to the æther" is neither velocity measured in the ordinary way, nor Absolute Velocity. And since these two meanings of "velocity" are the only two employed in physics outside electromagnetism, we must conclude that the velocity of electromagnetism is a new concept and is defined by the first proposition in which it occurs. Let us investigate the consequences of this conclusion.

§ 9. There are two classes of well-known observations which lead thus to a determination of the velocity of some body relative to the æther. The first of these, and the most direct, is represented by Rowland's experiment on the magnetic effect of moving charges. Rowland showed that, if a charge $e$ was moving with a velocity $u$ relative to a system of observing magnets, then the charge was equivalent to a current element $eu$. Therefore, and this is the only deduction possible, the velocity of the charge relative to the æther is its velocity relative to the observing system of magnets. The second series of observations concern aberration and the experiment of Michelson and Morley. It can be deduced from the fundamental propositions of electromagnetism that, if the velocity of an observer relative to the æther changes by an amount $u$, then the apparent direction of a ray of light seen by the observer is changed through an angle $\tfrac{u\sin\theta}{V}$, where $\theta$ is the angle between the direction of the ray and the direction of $u$. Observations on stars show that $u$ is the velocity of the earth in its orbit round the sun and $\theta$ the angle between that velocity and the direction of the star. On the other hand, observations made on terrestrial sources show that $u$ is zero. Accordingly we have to conclude, and this again is the only conclusion possible, that the velocity of the observer relative to the æther is the velocity of the earth in its orbit when stars are considered, and is zero when terrestrial sources are considered. Our observations prove, as the consideration of the simple facts of electrostatics suggested a priori, that the effective velocity in electromagnetic phenomena is the relative velocity between the acting and "observing" systems; the words "fixed in the æther" mean, for any given observer, "fixed in the system the action of which he is observing."
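[For scale, with modern round numbers: the earth's orbital speed is $u \approx 30\ \mathrm{km/s}$ and $V = c \approx 3\times10^{5}\ \mathrm{km/s}$, so the greatest aberration angle ($\theta = 90^\circ$) is $u/V \approx 10^{-4}\ \mathrm{rad} \approx 20.6''$, the classical constant of aberration observed for stars.]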
Even if we start from the standpoint of the "ætherialist," observation forces us to accept the principle of relativity.

§ 10. But believers in the æther refused to accept the logic of their conclusions; they were so obsessed by ideas derived from their constant use of the word, that they would not accept the idea that an observer could have at the same time several different velocities relative to the æther. They talked of "reconciling" the results of aberration and of the Michelson experiment; but there was no "reconciliation" needed. The results formed a perfectly logical whole without any trace of contradiction. It is true that, if velocity is defined as for a solid material body, a conclusion that one body has several different velocities relative to another does prove that there has been some fallacy in the argument; but they had defined velocity in a perfectly different way, and there was no reason to suppose that the new definition of velocity would have the same limitations as the old. As well might a mathematician, previously acquainted only with real quantities, who had established a system for the solution of quadratic equations, think there was need for "reconciliation" when first he encountered an imaginary root. The "reconciliation" which was effected was in truth a revolution and a most disastrous revolution. The "ætherialists" declared that they were going to throw over their old definition and substitute a new one; that this decision was wise everyone will agree, but there will not be agreement as to the wisdom of their new choice. It was now said that the difference between the velocities relative to the æther of any two bodies was equal to their relative velocity, but that the velocity relative to the æther of any body was uncertain to the extent of a constant. They then proceeded to show with great care that no experiment which we could possibly hope to perform, until our appliances attain a perfection of a different order, could give us any information as to the value of that constant; but that if such experiments ever could be performed, there was no reason for supposing that the quantity which was assumed to be constant would actually be found to be constant. And then they settled down with a sigh of satisfaction in the happy conviction that a solution of all the difficulties connected with the æther had been found which would meet with universal approval.

§ 11. But the approval has not been universal. M. Poincaré has attacked the scheme on the ground that it needs a fresh assumption whenever the delicacy of our instruments is increased. And it has also occurred to many people that there is something very unsatisfactory in introducing into the fundamental equations of a science a quantity which cannot be measured experimentally, either directly or with the help of those equations. It is probable that the future historian of physics will be astounded that the vast majority of physicists should accept a system of such bewildering complexity and precarious validity rather than abandon ideas which seem to have their sole origin in the use of the word "æther," and reject those to which so many lines of thought point insistently.
Unless a perfectly arbitrary assumption is made as to the value of the "velocity of the æther" relative to some observing system, observation forces us to the adoption of the principle of relativity — to the belief that the axes "fixed in the æther," to which Maxwell's equations must be referred, are axes fixed in the charged system which is the source of the energy of which the transformations are investigated. It has been asserted that such ideas are really even less satisfactory than those based on the conception of a single æther, because "they require such a very complex structure for the æther." But if we abandon the use of the word "æther" their essential simplicity appears. The system in which electromagnetic energy is localized ceases to be a single body independent of all material bodies; it becomes a collection of portions which are to be regarded as parts of every separately moving charged body; if the charged body is in uniform motion relative to the observer the portion of the æther in which its energy is localized moves with the same velocity relative to that observer. The principle of relativity does not complicate our interpretation of electrical phenomena; it simplifies it in reducing by one the number of bodies that have to be taken into account.

§ 12. It would be easy to proceed to attack in like manner other confusions to which the use of the concept "æther" has given rise, to analyse the many and mutually inconsistent attempts which have been made to estimate its density, rigidity, and even atomic weight. My object is not to marshal all the arguments that might be brought against the use of that concept, but only those which appear to me to be especially destructive at the present time. The recent work of Bucherer, and the atomic theories of J. J. Thomson and Planck (the latter recently developed by Stark[4] so as to resemble the former very closely) will be found very difficult for believers in the æther to assimilate or to explain away; if they attempt to do so it will doubtless be in the belief that the concept of the æther is worth retaining. A demonstration that the case for the æther is ludicrously weak, where it was thought to be strongest, that the concept has never been the source of anything but fallacy and confusion of thought, may serve to expedite its relegation to the dust-heap where now "phlogiston" and "caloric" are mouldering.

Note. It is desirable that a few remarks should be made on the relation between this paper and another on "The Principles of Dynamics," which is published in the same number of this Magazine; for it might appear that some statements made above are inconsistent with those made elsewhere. One of these statements is that to which the footnote on p. 184 is appended. In the "Principles of Dynamics" it is pointed out that the velocity which is discussed in physics is almost always Velocity, and that it is not definable immediately in terms of distances and times. (The notation used in this Note is the same as that used in the paper to which reference is made.) I have not reconciled these apparent inconsistencies by adopting the terms employed in the "Principles of Dynamics," which was written considerably later, because it seems to me that the argument as it is stated here, though objectionable in form, is more convincing than it would have been otherwise, and requires less subtlety of thought. But I propose in this Note to point out how it would appear if viewed from the standpoint of the later ideas.
The only meaning which is given to the word "velocity" in scientific discussion, which can be stated without assuming the truth of a scientific theory, is $\tfrac{dr}{dt}$, where $r$ and $t$ are Distances and Times bearing relation A to distances and times. Other quantities, like Absolute Velocity, which are called velocities because they are related in a certain way to Relative Velocities, can only be defined by stating the relation in the form of an equation, which is, in fact, the expression of a scientific theory. If we reject the identification of particles of the æther by their energy-content, we reject the possibility of measuring the distance of such a particle from any other particle, and consequently of defining the Relative Velocity of such a particle by means of relation A. The "velocity of the æther," defined by those who reject this possibility, must be meaningless without assuming the truth of the first theory in which it is mentioned (Maxwell's equations), just as the quantity "b" is meaningless without assuming the truth of Van der Waals's equation. On solving the equations by which the "velocity of the æther" is defined, it is found that different values for this velocity are found for any particle in different cases — a conclusion which shows that this "velocity" has properties different from those of Relative Velocity. It would be analogous if the quantity "b" were found to be negative or imaginary, showing that "b" has different properties from those attributed to a Volume by definition. In the last case two alternatives would be open: the conclusion might be accepted, or a new theory might be stated which would lead to a different conclusion. In the case of the "æther," all are agreed that the conclusion is to be rejected and a new theory stated. Adherents to the principle of relativity point out that a new theory can be stated which avoids all necessity for any such quantity as "velocity of the æther": it can be stated in terms of quantities which are related to measurements by relation A alone. The "ætherialists," on the other hand, propose a new theory which introduces again a quantity of the same nature as before, but avoid the possibility of the occurrence of fresh undesirable conclusions about it by stating the theory so that the value of the quantity cannot be found by any experiment that is ever likely to be performed. My contention is that the former procedure is the more satisfactory, and to what I have said I will only add one argument derived from the analogy of dynamics. I imagine that physicists would agree that if dynamics could be stated in terms of Relative Motion only, without complicating the equations so that they would be unamenable to mathematical treatment, that course should certainly be adopted. "Absolute Motion" is a disagreeable necessity forced on us by the insufficiency of our powers of mathematical treatment. The case against "velocity of the æther" is stronger than that against Absolute Motion, for we can find the value of Absolute Velocity by assuming the truth of the equations by which it is defined, and we cannot find the value of "velocity of the æther," even by assuming the truth of those equations. On the other hand, there is no argument in favour of "velocity of the æther" derived from the necessities of mathematics, because the equations based on the principle of relativity are just as simple as those based on the conception of the æther. September 1909. 1.
Communicated by the Author. 2. Note.—It may be pointed out that the gist of the argument is contained in chap. xiv. of 'Modern Electrical Theory' (Cambridge, 1907), and in an article in the 'New Quarterly Review' No. 3. 3. See note at the end of the paper. 4. J. Stark, Phys. Zeit. Sept. 1909, p. 570.
https://physics.stackexchange.com/questions/111074/supernovae-and-black-holes/111077
# Supernovae and black holes? I think I am correct in saying that a supernova (Type II) is caused by the collapse of the core of a giant star. This contraction of the core is stopped by the Pauli exclusion principle and the core becomes rigid. The outer layers now rebound off the rigid core and are thrown into space as a supernova. But presumably this leaves behind a neutron star and not a black hole. For a black hole to form, the Pauli exclusion principle would not be strong enough to stop the collapse? If so, what causes the outer layers to rebound, as there is no rigid object for them to rebound off? What I am basically asking is: when a black hole forms, what mechanism causes the supernova? • Take a look at the table here on Wikipedia. It is slightly more speculative than it advertises, but you can see that most black-hole-forming scenarios do not lead to SNe. Some may lead to GRBs, but these are powered by jets rather than spherical shocks. – user10851 May 3 '14 at 22:50
http://www.flyingcoloursmaths.co.uk/basic-maths-skills-how-estimating-saved-me-nearly-30/
# Basic Maths Skills: How Estimating Saved Me Nearly $30 It wasn’t a big grocery run – in those days, I was a single bloke, living alone; a loaf of bread, a frozen pizza, a jar of coffee… “Dinner,” as I called it. Oh and a few Lindt balls. You know them – the little chocolate balls you get at Christmas, with some sort of tasty filling. I liked the white chocolate ones, but I’ve cut down. They’re about the only edible chocolate on sale in America. “Beep,” said the checkout; “did you find everything you were looking for?” said the lady behind it. I nodded to both. “That’ll be $47,” said the checkout lady, and I scratched my head. “Are you totally sure about that?” She pointed at the screen. Sure enough, it said $47. I put away the $20 bill I had prepared and got my debit card out. “Sorry to be a pain,” I said – one of the benefits of having a British accent in the US is that you can get away with a spot of the Hugh Grants – “but could I see the receipt?” Why was I making a fuss? Well, it was because I’d run the sums. I knew perfectly well that even the top-of-the-line coffee I’d picked wouldn’t have taken the total over $20. Three bucks for the bread, six for the pizza, seven for the coffee, one for the chocolate, give or take. Ever since I was a flat-broke student, I’ve always had the habit of keeping a running total, to the nearest pound or dollar or so, as I go around shops – it helps me to have the right money ready. “What are these three $9.99 items?” I asked. My shopping was still clearly visible in the bagging area. There were no $9.99 items to be seen, let alone three. The checkout lady’s face reddened. “The Lindt balls are three for a dollar, right?” I’m not sure how the chocolates had rung up at $9.99 each. Perhaps it was a system error on the till (more than likely; this particular supermarket was still getting to grips with electricity, so computers were a bit of a leap.) In any case, the lady apologised profusely, corrected the mistake and revised the total to $18.03. “There’s $20,” I said smugly; basic maths skills had saved the day again. ## Colin Colin is a Weymouth maths tutor, author of several Maths For Dummies books and A-level maths guides. He started Flying Colours Maths in 2008. He lives with an espresso pot and nothing to prove.
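A quick sanity check of those numbers (my own back-of-the-envelope, assuming the three-for-a-dollar chocolates should have rung up at about $0.33 apiece):

$$3 \times \$9.99 \;-\; 3 \times \$0.33 \;\approx\; \$29.97 - \$1.00 \;=\; \$28.97,$$

which is where the "nearly $30" in the title comes from: the till total of $47 less the corrected $18.03 is exactly $28.97.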
https://blender.stackexchange.com/questions/2892/is-it-possible-to-put-an-if-statement-into-the-scripted-expression-of-a-driver
# Is it possible to put an if statement into the scripted expression of a driver?

If the y location of a certain object is greater than 0, I want to add a fixed amount to another object's driven channel. How do I achieve that?

valueIfTrue if isConditionTrue else valueIfFalse

This is known as a ternary conditional operator, Python's one-line if/else expression, and it can be used directly in a driver's scripted expression.
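A minimal sketch of how this looks in practice; the variable name loc_y and the numbers are illustrative assumptions (in the driver editor you would add a driver variable, name it loc_y, and point it at the other object's Y location):

```python
# Scripted expression for the driven channel (a single Python expression).
# loc_y is a driver variable mapped to the source object's Y location;
# when it is above zero, a fixed 0.5 is added to the base value 1.0.
1.0 + (0.5 if loc_y > 0 else 0.0)
```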
https://www.physicsforums.com/threads/heat-loss-rate-directionality-sign-convention.549624/
# Heat loss rate directionality/sign convention

1. Nov 11, 2011

1. The problem statement, all variables and given/known data

2. Relevant equations

$H=-k A \frac{\Delta T}{\Delta x}$

kconcrete = 1.0 W / m K (approx)

3. The attempt at a solution

$H=\frac{-1.0 \times (8 \times 12) \times (10-20)}{0.23} \approx 4174\ \text{W} \approx 4.17\ \text{kW}$

My problem is that the book (Essential University Physics, Wolfson) has the answer as -4.17 kW, but that would imply that heat is being transferred from the ground to the house, which it clearly cannot be, since the temperature of the house is higher than the ground's. I usually have problems with the sign of this value of H: how do I know what direction they are asking for by "through" an object - it could be either. Would either value be acceptable in an exam? Thanks.

2. Nov 11, 2011

### RTW69

Are you sure the sign on delta X is positive? What is your coordinate system for the horizontal slab? Is x = 0 the 10 or 20 C surface?

3. Nov 12, 2011

### technician

First of all you have given k to be 1.0 W/m K, which means 1000 W/K to be used in the equation. This gives the answer to the equation to be 4.17 kW. As to which direction, heat flows from hot to cold. Sometimes worrying about the signs that crop up in equations can be confusing... they are never 'wrong' but don't forget common sense.

4. Nov 13, 2011

### technician

Sorry! My mistake, mK is correct. I misread it as milliKelvin and not as m.K. Ignore my previous response
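For what it's worth, a quick numeric check of the magnitude (the sign then depends purely on the coordinate convention, which is exactly the point RTW69 raises):

```python
# Conduction through the concrete slab: H = k * A * dT / dx
k = 1.0          # W/(m K), approximate conductivity of concrete
A = 8 * 12       # m^2, slab area from the problem
dT = 20 - 10     # K, interior temperature minus ground temperature
dx = 0.23        # m, slab thickness
H = k * A * dT / dx
print(H)         # 4173.9... W, i.e. about 4.17 kW flowing from house to ground
```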
http://ieeexplore.ieee.org/xpls/icp.jsp?reload=true&arnumber=6579629
SECTION I INTRODUCTION Widely tunable semiconductor lasers are attractive for various applications in, for example, today's dense wavelength division multiplexed (DWDM) systems and broadband sensors [1]. Tunable lasers are also key components in future optical packet and burst switching (OPS/OBS) systems to reduce the latency and increase the capacity of current optical transmission networks [2], [3]. Several typical tunable lasers have been demonstrated, including the sampled-grating (SG) or superstructure grating (SSG) distributed Bragg reflector (DBR) lasers [4], [5], the grating assisted co-directional coupler with rear sampled grating reflector (GCSR) lasers [6], the digital supermode (DS) DBR lasers [7] with phase gratings [8], [9], the binary superimposed grating (BSG) DBR lasers [10] and the modulated grating Y-branch (MGY) lasers [11]. In addition, to provide a "flat-top" comb reflector which is ideal for tunable lasers, we have proposed the wavelength tunable digital concatenated grating (DCG) DBR laser [12]. In this paper, by combining the multiple reflection spectrum concatenation [12] and multiple phase shifts [13] technologies, we propose a widely tunable four-section DBR laser based on the DCG with multiple phase shifts (MPS-DCG). The static characteristics of the proposed MPS-DCG DBR laser are simulated and analyzed with a time-domain traveling-wave model. The MPS-DCG can provide periodic multiple high reflection peaks over a wavelength range of more than 100 nm. Compared to the previous DCG design, the MPS-DCG provides a narrower 3 dB bandwidth of reflection peaks, which will help to increase mode selectivity. Furthermore, the grating period difference of the MPS-DCG is slightly bigger than that of the DCG, which will help to relieve the stringent requirements on fabrication tolerance of the grating. SECTION II GRATING DESIGN AND LASER STRUCTURE To provide a "flat-top" comb reflector for DBR type tunable lasers, we previously proposed a DCG structure [12], and then fabricated the grating by using the nanoimprint lithography (NIL) technology [14]. Fig. 1(a) and (b) show the SEM photographs of the DCG pattern on resist after the nanoimprint processing and the inductively coupled plasma (ICP) etched DCG, respectively. In Fig. 1(b), the periods of the concatenated grating segments are approximately 224 nm, 228 nm, and 232 nm. In practice, we have to carefully control the fabrication tolerance as the period difference of the concatenated Bragg gratings is approximately 4 nm. Besides, the 3 dB bandwidth of the reflection peaks of a DCG is not narrow enough, which may result in reduced SMSR and mode instability. This can be observed from the measured lasing spectra of a demonstrated DCG-DBR laser in Fig. 2. Clearly, discrete wavelength tuning of about 32 nm is obtained with six different supermodes. We can see from Fig. 2 that, although the SMSRs of these six supermodes are larger than 30 dB, the side modes are not well suppressed, possibly due to the relatively large 3 dB bandwidth of the comb reflection peaks of the DCG. Fig. 1. SEM photographs of the fabricated DCG. (a) DCG pattern on resist after nanoimprint; (b) the ICP etched DCG. Fig. 2. Superimposed spectra of six DCG-DBR laser supermodes. For the above reasons, here we use the multiple phase shifts technology to improve the DCG, i.e., the MPS-DCG. The schematic structure of the MPS-DCG is shown in Fig. 3.
A MPS-DCG consists of $M$ grating segments with different uniform periods, $\Lambda_{1}, \Lambda_{2}, \ldots$, and $\Lambda_{M}$ in one sampling period. $Z_{g}$ and $Z_{s}$ are the length of the grating segments and the sampling period, respectively, and $Z_{s} = MZ_{g}$. The phase shift between the $k$th and $(k + 1)$th sampling period is $\varphi_{k} = 2\pi k/m$, where $m$ is the phase shift factor. A MPS-DCG thus consists of $M$ multiple-phase-shift sampled gratings. By carefully designing the MPS sampled grating segment periods, the reflection spectrum envelope of each grating segment is concatenated [15], and by increasing $m$, the channel count within the 3 dB reflection envelope bandwidth can be densified. Therefore, by reducing the sampling period $Z_{s}$ to $Z_{s}/m$, the 3 dB bandwidth of the reflection spectrum envelope can be increased by a factor of $m$ without changing the peak spacing. Fig. 3. Schematic structure of a MPS-DCG. In our design, in order to concatenate the sub-grating reflection spectrum envelope, the Bragg period of the $i$th sub-grating satisfies the following equations [15]:

$$\Lambda_{i} - \frac{\lambda_{c}}{2n_{\rm eff}} = \begin{cases} \dfrac{H}{2n_{\rm eff}}\left(i - \dfrac{M + 1}{2}\right)\Delta\lambda, & m \text{ odd} \\[2ex] \dfrac{H}{2n_{\rm eff}}\left(i - \dfrac{M + 1}{2}\right)\Delta\lambda + \dfrac{\Delta\lambda}{4n_{\rm eff}}, & m \text{ even} \end{cases} \qquad i = 1, 2, \ldots, M \tag{1}$$

$$\Delta\lambda = \frac{1}{m}\frac{\lambda_{c}^{2}}{2n_{\rm eff}Z_{s}} \tag{2}$$

where $\lambda_{c}$ is the designed center wavelength of the reflection spectrum, $n_{\rm eff}$ is the effective refractive index, $\Delta\lambda$ is the channel spacing of the MPS-DCG and $H$ is a positive coefficient. The optimal $H$ is equal to $m \times M$ to obtain a reflection spectrum with wide wavelength range and flat peak reflectivity [15]. Fig. 4 shows the calculated reflection spectra of the MPS-DCG and DCG. The results in Fig. 4(a) indicate that by employing the multiple phase shifts technology, the 3 dB bandwidth of the reflection spectrum envelope can be increased. When $m = 2$, $M = 3$, multiple reflection peaks are observed over a range of more than 120 nm with a high reflectivity of 60% $\sim$ 80%. The peak 3 dB bandwidths shown in Fig. 4(b) are 0.92 nm and 1.2 nm for the MPS-DCG and DCG, respectively. The narrower peak bandwidth of the MPS-DCG has advantages in good SMSR and mode stability. The corresponding structure parameters of the MPS-DCG and DCG are listed in Table 1. The grating coupling coefficient is 100 $\hbox{cm}^{-1}$. As shown in this table, the grating period differences between grating segments of the MPS-DCG and DCG are approximately 8.4 nm and 4.2 nm, respectively. As the MPS-DCG has a bigger period difference, it is easier to control in fabrication. Fig. 4. (a) Reflection spectra of the MPS-DCG, $m = 2$, $M = 3$ and DCG, $M = 3$; (b) Peak 3 dB bandwidth at wavelength around 1550 nm. TABLE 1 STRUCTURE PARAMETERS OF THE MPS-DCG AND DCG Fig. 5 shows the schematic structure of the wavelength tunable MPS-DCG DBR laser. The MPS-DCG reflectors are formed in the front and rear passive sections. There is a slight difference in reflection peak spacing between the front and rear MPS-DCG reflectors to extend the tuning range based on the Vernier mechanism [4]. The two grating reflectors consist of three grating segments with different Bragg periods in one sampling period, which can be calculated from (1) and (2), and the phase shift factor $m$ is chosen to be 2.
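As a numeric illustration of equations (1)-(2), here is a short sketch; $m = 2$, $M = 3$ and $H = mM$ follow the text, but the effective index and sampling period below are assumed values for illustration only (the paper does not list them):

```python
import numpy as np

# Sub-grating Bragg periods of a MPS-DCG from eqs. (1)-(2), m even case.
lam_c = 1550e-9    # designed centre wavelength (m)
n_eff = 3.2        # effective refractive index (assumed value)
Z_s   = 50e-6      # sampling period (m, assumed value)
m, M  = 2, 3
H     = m * M      # optimal choice per the text

d_lam = lam_c**2 / (2 * n_eff * Z_s) / m               # eq. (2): channel spacing
i = np.arange(1, M + 1)
Lam = (lam_c / (2 * n_eff)
       + H / (2 * n_eff) * (i - (M + 1) / 2) * d_lam
       + d_lam / (4 * n_eff))                          # eq. (1), m even

print(f"channel spacing: {d_lam * 1e9:.3f} nm")
print("sub-grating periods (nm):", np.round(Lam * 1e9, 2))
```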
The structure parameters of the front and rear MPS-DCGs are listed in Table 2, and the grating coupling coefficient is 100 $\hbox{cm}^{-1}$. The corresponding reflectivities are shown in Fig. 6. This device has a designed tuning range of 88 nm. The lengths of the active and phase sections are 400 $\mu\hbox{m}$ and 100 $\mu\hbox{m}$, respectively. Both facets of the MPS-DCG DBR laser are AR-coated. Fig. 5. Schematic structure of the MPS-DCG DBR laser. TABLE 2 STRUCTURE PARAMETERS OF THE FRONT AND REAR MPS-DCGS Fig. 6. Reflectivities of the front and rear MPS-DCGs. SECTION III NUMERICAL RESULTS AND DISCUSSIONS The characteristics of the MPS-DCG DBR laser are simulated using the time-domain traveling-wave model [16] combined with a digital filter method [17], [18]. The material and physical parameters used in the simulation are given in Table 3. TABLE 3 MATERIAL PARAMETERS USED IN SIMULATION Fig. 7(a) and (b) show the wavelength and the SMSR tuning curves as a function of the currents across the front and rear grating sections, respectively. The active and phase sections are biased at 75 mA and 0 mA, respectively. As shown in Fig. 7(a), by tuning the front current from 0 mA to 30 mA, the mode hops to higher wavelength; after that, a cycle jump of the lasing wavelength from 1581.8 to 1502 nm is observed. The mode hop spacing is about 8.9 nm, which is approximately equal to the peak spacing of the rear grating due to the Vernier tuning mechanism [4]. In a similar way, by tuning the rear current from 0 mA to 40 mA, the mode hops to lower wavelength, and a wavelength cycle jump from 1509.8 to 1589.9 nm is observed with a mode spacing of around 9.9 nm, which is approximately equal to the peak spacing of the front grating. It is clear in Fig. 7(a) and (b) that a good SMSR (> 40 dB) can be achieved in the center area of each mode, while the SMSR decreases at the mode boundaries. The 3-D wavelength tuning map is plotted in Fig. 8. It shows that the maximum tuning range of the device is approximately 90 nm, covering a wavelength range from 1500 nm to 1590 nm, which is slightly larger than the designed value. Fig. 7. Tuning curves of: (a) front grating section, with ${\rm I}_{\rm r} = 29.5\ \hbox{mA}$; (b) rear grating section, with ${\rm I}_{\rm f} = 6\ \hbox{mA}$. Fig. 8. Three-dimensional wavelength tuning map. As shown in Fig. 9, when the active section and phase section are biased at 75 mA and 0 mA, respectively, by tuning the current applied on the front and rear grating sections, the output power from the front MPS-DCG section facet varies from approximately 7.5 dBm to 10.8 dBm, which is predicted to be larger than that of the SG-DBR lasers [4]. This is because a shorter grating is needed to provide sufficiently high reflectivity for the MPS-DCG, thus reducing the optical loss in the front section. However, as thermal effects are not included in this model, the output power might be overestimated, especially when a large current is injected into the active section. This is because, as the active region current increases, the active region temperature rises as well, and leakage and non-radiative recombination become more significant, which reduces the output power. Since the output optical field has to pass through the front MPS-DCG section, this causes a power variation of 3.3 dB when increasing the injected currents in the grating sections. An SOA integrated in the front MPS-DCG section could reduce the output power variation and improve the output power further. Fig. 9.
Contour of the output power. In Fig. 10, the light output power is shown for three wavelength channels as a function of the current injected into the active section. No kinks are observed over the tuning range from 40 mA to 120 mA. The results indicate that the threshold currents of the three different wavelengths are around 33 mA. We attribute this to the relatively low reflectivity of the front grating section, as shown in Fig. 6. Increasing the reflectivity of the front MPS-DCG would reduce the threshold current, but it would also reduce the output power. Therefore, a design tradeoff is found between the threshold current and the output power of the laser. Fig. 10. P-I curves of the MPS-DCG DBR laser. SECTION IV CONCLUSION We have proposed a new widely tunable DBR laser based on a digital concatenated grating with multiple phase shifts. The static characteristics of the MPS-DCG DBR laser were analyzed. The tuning range of the laser was approximately 90 nm, while maintaining a high SMSR (> 40 dB) in the center of each mode. As the MPS-DCG has a high reflection efficiency, only short gratings are needed to provide enough reflectivity, and high output power can easily be obtained. Moreover, the MPS-DCG can be applied to other laser structures such as the GCSR laser or the DS-DBR laser, as it can produce broadband periodic multiple high reflection peaks with a simple structure. These results show that the proposed device could be a promising candidate for future tunable laser sources. Footnotes This work was supported in part by the International S&T Cooperation Program of China under Grant 1016, by the National Natural Science Foundation of China under Grant 11174097, and by the Specialized Research Fund for the Doctoral Program of Higher Education (SRFDP) under Grant 20100142110045. Corresponding author: Y. Yu (e-mail: yonglinyu@mail.hust.edu.cn).
http://community.wolfram.com/groups/-/m/t/1177359
# New SystemModeler Example: Hare-Lynx, Interactively Explore Dynamics

Patrik Ekenberg

A new example has been added to the collection of ready-made example models available on the SystemModeler site: Hare-Lynx: Interactively Explore Population Dynamics. The model requires the free SystemDynamics Modelica library, which can be downloaded from here. The model uses predator-prey equations, also known as Lotka-Volterra equations. It aims to describe a biological system consisting of two species, a predator (here lynx) and a prey (here hares), and the interaction between them. A large number of predators in the system will increase the mortality of the prey due to predation. A low number of prey will decrease the viability of the predator due to starvation. While they can be useful for describing phenomena in nature, what is very interesting about these systems is their mathematical properties. Solving the system will yield a periodic solution that is something akin to a harmonic motion:

Needs["WSMLink"]
WSMPlot["HareLynxPopulation", "Population Levels", Ticks -> None]

The predator lags the population growth of the prey by 90° in the periodic solution, but otherwise the populations are stable over time. There are important parameters that affect these equations. The initial conditions are one of them; the rate at which hares and lynx are born is another. If we want to explore these four parameters, we can make use of one of SystemModeler 5's latest features, WSMParametricSimulateValue. In essence, what it does is treat a SystemModeler model just like any other Wolfram Language function. You give it a set of arguments and from the function you get the trajectories of some variables over time. It is easy to set up these functions: you just give it the model name, the variables that you want to get back when you call the function, and the parameters that you want to give as arguments:

sim = WSMParametricSimulateValue["HareLynxPopulation", {"hare", "lynx"}, {"HareInitialPopulation", "LynxInitialPopulation", "HareBirthFactor", "LynxBirthFactor"}, WSMProgressMonitor -> False]

Now you can call the function with arbitrary arguments and get back a result:

sim[30000, 2000, 1.25, 0.25]

You can use this directly to plot the two populations:

Map[ListLinePlot, sim[30000, 2000, 1.25, 0.25]]

Of even more interest could be to look at a parametric plot of the system, with the population of hares (x-axis) versus the population of lynx (y-axis).

ParametricPlot[ Evaluate[Through[sim[30000, 2000, 1.25, 0.25][t]]], {t, 0, 15}, AspectRatio -> 1, PlotRange -> {{0, Automatic}, {0, Automatic}} ]

Since it works like any other Wolfram Language function, you can put it inside of a Manipulate. For example, here you can explore how the model responds to changing the birth rates for the two species:

Manipulate[ ParametricPlot[ Evaluate[Through[sim[30000, 2000, hareBirthFactor, lynxBirthFactor][t]]], {t, 0, 15}, AspectRatio -> 1, PlotRange -> {{0, Automatic}, {0, Automatic}} ], {{hareBirthFactor, 1.25}, 1.2, 1.5}, {{lynxBirthFactor, 0.25}, 0.2, 0.5}, ContinuousAction -> False ]

The downloadable example shows how a much more advanced interactive exploration can be constructed in the same vein. Here is a sneak peek of it using a locator pane. Learn about this and many more new SystemModeler 5 features on the What's New page.
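For reference, the post never spells out the underlying equations; a model of this kind implements the classical Lotka-Volterra system, with $x$ the hare and $y$ the lynx population:

$$\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y,$$

where $\alpha$ corresponds to the hare birth factor, $\gamma$ to the lynx death rate, and $\beta$, $\delta$ set the strength of predation and of predator reproduction.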
https://www.hanspub.org/journal/PaperInformation.aspx?paperID=30124
# Why Is the Moon Getting Further Away from Earth?

Full text: PDF (586 KB), pp. 149-161. DOI: 10.12677/MP.2019.93017

Why is the Moon getting further away from Earth? The first reason that comes to mind is, of course, the expansion of the universe, but calculations according to Hubble's law do not match the observations. Through the space-time ladder theory, we have established an expansion formula for the planets and satellites in the solar system, and the calculated results are basically consistent with the observations. Moreover, in interpreting the expansion formula, an analogue of the atomic fine-structure constant is found for the solar system. This constant is related to the contraction of the Energy QI field; it is the constant that causes the expansion speed of the planets to decrease in the form of an equiangular spiral, so the expansion velocity of the planets in the solar system differs greatly from the recessional velocity in Hubble's law. Through this calculation, it is found that although both the atom and the solar system contract in the form of physical space-time, there is a difference: the atom shrinks within contraction, which is a complete contraction, while the solar system expands within contraction. This is consistent with the prediction of the space-time ladder theory. The contraction of atoms and the expansion of the universe correspond to each other. The increase of the atomic fine-structure constant proves that the atom is shrinking, and in the space-time ladder theory the atomic contraction appears as the universe expanding. These results are taken as proof that the space-time ladder theory is correct. In addition, this study found that the interpretation of force by the space-time ladder theory is equivalent to the interpretation of force by quantum electrodynamics (QED), and is based on fine-structure constants.

1. Introduction

In 1994, Dickey et al. published a paper in Science showing that the Moon is receding from Earth at a rate of 3.82 ± 0.07 cm per year [1].

2. Outline of the Space-Time Ladder Theory

$F_{1} = q\left(E_{1} + v_{1} \times B\right)$

$F = m\left(E + v \times Q\right)$

$v_{1} = H_{0}D$

$v_{2} = \frac{RQ}{\sin\theta}$

3. Calculation According to Hubble's Law

Table 1. The theoretical calculated distance of the eight planets away from the Sun. Table 2. The actually observed distance of the eight planets away from the Sun.

4. Construction of the Empirical Formula

$v = QD$

where $v$ is the expansion velocity of a planet (e.g., the Earth) or of a satellite (e.g., the Moon), $Q$ is the qi induction strength, and $D$ is the distance from the planet to the Sun, or from the Moon to the Earth.

$k_{1}v = QD\left(\frac{c}{v_{1}}\right)$

$v = \frac{H_{0}D}{k_{1}}\left(\frac{c}{v_{1}}\right)^{2}$

$v = \frac{H_{0}D}{k^{4}}\left(\frac{c}{v_{1}}\right)^{2}$

5. Verifying the Empirical Formula

Table 3. Average distance from the eight planets to the Sun and average orbital speed. Table 4. Distance of the eight planets away from the Sun calculated using only one expansion coefficient. Table 5. Expansion coefficient calculated according to the actual distance. Table 6. Comparison between the theoretical expansion coefficient and the actual expansion coefficient. Table 7. The difference between the theoretical and the actual distance.

$v = \frac{H_{0}D}{k^{4}}\left(\frac{c}{v_{1}}\right)^{2}, \quad k = 306.697266806281\,\mathrm{e}^{\left(\frac{n\pi}{18}\right)}, \quad (n = 1, 2, 3, 4, 5, 6, 7, 8, \ldots)$

6. A Preliminary Interpretation of the Empirical Formula
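For a sense of scale in the §3 comparison (my own illustration, not the paper's, assuming $H_0 \approx 70$ km/s/Mpc and the mean Earth-Moon distance), applying Hubble's law $v_1 = H_0 D$ directly to the Earth-Moon distance gives:

```python
# Naive Hubble-law recession rate for the Earth-Moon distance.
H0 = 70e3 / 3.086e22        # Hubble constant, ~70 km/s/Mpc, in 1/s
D = 3.844e8                 # mean Earth-Moon distance in m
year = 3.156e7              # seconds per year

v = H0 * D                  # m/s
print(v * year * 100)       # ~2.8 cm/yr, versus the measured 3.82 cm/yr [1]
```

The naive rate is of the same order as, but does not match, the measured value, which is the mismatch the paper takes as its starting point.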
7. The Meaning of the Fine-Structure Constant

Here $e$ is the elementary charge, $\epsilon_0$ is the vacuum permittivity, $\hbar$ is the reduced Planck constant ($h$ being the Planck constant), and $c$ is the speed of light.

"The Dirac equation holds that the fine structure of spectra is caused by the electron's spin-orbit interaction and is a relativistic effect." From the standpoint of the space-time ladder theory, a relativistic effect means that a boundary has been drawn for physical space-time: the speed of light, and everything below it, belongs to physical space-time; physical space-time cannot exceed the speed of light, and it is contracting, with the contraction gradually strengthening. The Dirac equation carries the richer content of the physical space-time ladder, and so it can describe a richer variety of particles and particle spins [14].

$F = \frac{mc^{2}}{r}\frac{c^{n}}{v_{1}v_{2}v_{3}\cdots v_{n}}$

1) If the atomic fine-structure constant increases, the universe is expanding.

2) If the atomic fine-structure constant decreases, the universe is contracting.

3) If the atomic fine-structure constant is unchanged, the universe is in dynamic equilibrium; that is, the universe neither expands nor contracts, or rather it both expands and contracts, with expansion and contraction in dynamic balance.

8. The Universe Unfolds as an Equiangular Spiral

9. Summary

[1] Dickey, J.O., Bender, P.L., Faller, J.E., Newhall, X.X., Ricklefs, R.L., Ries, J.G., Shelus, P.J., Veillet, C., Whipple, A.L., Wiant, J.R., Williams, J.G. and Yoder, C.F. (1994) Lunar Laser Ranging: A Continuing Legacy of the Apollo Program. Science, 265, 482-490. https://www.hq.nasa.gov/alsj/LRRR-94-0193.pdf https://doi.org/10.1126/science.265.5171.482 [2] Why the Moon Is Getting Further Away from Earth. BBC News (BBC.com). https://www.bbc.com/news/science-environment-12311119 [3] Is the Moon Moving Away from the Earth? When Was This Discovered? (Intermediate). Ask an Astronomer at Cornell University. http://curious.astro.cornell.edu/about-us/37-our-solar-system/the-moon/the-moon-and-the-earth/111-is-the-moon-moving-away-from-the-earth-when-was-this-discovered-intermediate [4] Why Is the Moon Drifting Away from Earth? Quora. https://www.quora.com/Why-is-the-Moon-drifting-away-from-Earth [5] Study Finds the Moon Is Receding from Earth and Will Grow Ever Dimmer. Guangming Daily. http://tech.ifeng.com/discovery/astronomy/detail_2013_10/08/30111756_0.shtml, 2013-10-08. [6] Cheng, Y. Why Is the Moon Getting Further and Further Away from Us? https://zhuanlan.zhihu.com/p/30536395 [7] xp883net. Why Does the Moon Gradually Recede from Earth? https://zhidao.baidu.com/question/45303744.html [8] Earth's Tides Are Shoving the Moon Away Faster. https://www.newscientist.com/article/mg21829184-700-earths-tides-are-shoving-the-moon-away-faster [9] Xingxie. The Moon Is Receding from Earth Faster than Imagined! http://www.sohu.com/a/277372131_100153806 [10] Krasinsky, G.A. and Brumberg, V.A. (2004) Secular Increase of Astronomical Unit from Analysis of the Major Planet Motions, and Its Interpretation. Celestial Mechanics and Dynamical Astronomy, 90, 267-288. https://link.springer.com/article/10.1007/s10569-004-0633-z https://doi.org/10.1007/s10569-004-0633-z [11] Pearson, E. (2018) Solar System's Orbits Are Expanding. Sky at Night Magazine. http://www.skyatnightmagazine.com/news/solar-systems-waistline-expanding [12] Genova, A., Mazarico, E., Goossens, S., Lemoine, F.G., Neumann, G.A., Smith, D.E. and Zuber, M.T. (2018) Solar System Expansion and Strong Equivalence Principle as Seen by the NASA MESSENGER Mission. Nature Communications, 9, Article No. 289. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5773540/ https://doi.org/10.1038/s41467-017-02558-1 [13] Miura, T., Arakida, H., Kasai, M. and Kuramata, S. (2009) Secular Increase of the Astronomical Unit: A Possible Explanation in Terms of the Total Angular Momentum Conservation Law. Publications of the Astronomical Society of Japan, 61, 1247-1250. https://arxiv.org/pdf/0905.3008.pdf [14] Chang, B. Collected Works on the Space-Time Ladder Theory: Matter, Dark Matter, Dark Energy. Wuhan: Hans Publishers, 2018. [15] Gómez-Valent, A. and Amendola, L. (2018) H0 from Cosmic Chronometers and Type Ia Supernovae, with Gaussian Processes and the Novel Weighted Polynomial Regression Method. Journal of Cosmology and Astroparticle Physics, 2018. https://arxiv.org/pdf/1802.01505.pdf https://doi.org/10.1088/1475-7516/2018/04/051 [16] Cong, S. and Yang, Y. (1995) The Fine-Structure Constant. Journal of Huaibei Normal University (Natural Science Edition), 16(1), 61-63. [17] Weinberg, S. Why I Am Dissatisfied with Quantum Mechanics. Huanqiu Kexue (Global Science), 23 November 2016. [18] Johnston, H. (2010) Changes Spotted in Fundamental Constant. Physics World.
[19] Webb, J.K., King, J.A., Murphy, M.T., Flambaum, V.V., Carswell, R.F. and Bainbridge, M.B. (2010) Indications of a Spatial Variation of the Fine-Structure Constant. Physical Review Letters, 107, Article ID: 191101. https://doi.org/10.1103/PhysRevLett.107.191101 [20] King, J.A. (2010) Searching for Variations in the Fine-Structure Constant and the Proton-to-Electron Mass Ratio Using Quasar Absorption Lines. PhD Thesis. University of New South Wales, Sydney. [21] Overbye, D. (2017) Cosmos Controversy: The Universe Is Expanding, but How Fast? The New York Times. [22] Radford, T. (2016) Universe Is Expanding up to 9% Faster than We Thought, Say Scientists. The Guardian.
https://mathematica.stackexchange.com/questions/179157/get-the-number-of-specified-elements-from-the-sparsearray-properties
# Get the number of specified elements from the SparseArray properties In the sparse matrix, we have a list of properties: specified elements, dimensions, default, and density. I am interested in getting this data without calculating it again. For example, the following sparse matrix has four specified elements. s = SparseArray[{{1, 1} -> 1, {2, 2} -> 2, {3, 3} -> 3, {1, 3} -> 4}] I can calculate the number of specified elements of s by Length@s["NonzeroValues"] but I am looking for an option to get this information from the properties of s. • From the previous question, you know all the properties available. The length of "NonzeroValues" is not available, but you can calculate it, as you just did. Why is Length@s["NonzeroValues"] not an acceptable solution? I'm afraid it is not clear what fundamentally is the question you are asking. – rhermans Jul 29 '18 at 11:25 • I'm voting to close this question as off-topic because the OP already gave the optimal solution. – Henrik Schumacher Jul 29 '18 at 13:24 • At least if s is well-formatted (e.g., if SparseArray`SparseArraySortedQ[s] returns True), s["RowPointers"][[-1]] is a second possibility. But Length[s["NonzeroValues"]] is much safer. – Henrik Schumacher Jul 29 '18 at 13:28 • Also, Length won't compute anything. s["NonzeroValues"] is a basic constituent of a sparse array. Within the internal data type for sparse arrays (MSparseArray), s["NonzeroValues"] is stored as a dense array (MTensor). The length (and dimensions) of a dense array are also stored in each MTensor object, so Length just reads off the value of a field of the MTensor. – Henrik Schumacher Jul 29 '18 at 13:39 • @HenrikSchumacher Shouldn't that be a clear sign that OP should self-answer and wait to see if a better answer comes along, rather than a close vote? – eyorble Jul 29 '18 at 21:05 You want a convenient accessor for the nonzero value count, which seems reasonable. You could define your own function, but you'd like to get it from properties instead. I think this is a good use case for adding your own definition to a system function, which is allowed by the language. You just need to unprotect the symbol first, Unprotect[SparseArray]; (s_SparseArray)["NonzeroValueCount"] := Length[s["NonzeroValues"]]; Protect[SparseArray]; It can be accessed like any other property now, s["NonzeroValueCount"] (* 4 *) As an alternative to what you have already considered, there is the property "Density": slen = #["Density"] Times @@ Dimensions[#] &; SparseArray[{{1, 1} -> 1, {2, 2} -> 2, {3, 3} -> 3, {1, 3} -> 4}] // slen 4. • That seems a very convoluted way around Length@#["NonzeroValues"]&. – rhermans Jul 29 '18 at 13:53 • @rhermans Perhaps, but it is faster should that be important. I'll add an example. EDIT No, I'm wrong. I thought I'd looked at this before; it's not. Well, there's egg on my face. :^) – Mr.Wizard Jul 29 '18 at 13:54 • SparseArray`SparseArrayDimensions for speed then. – rhermans Jul 29 '18 at 13:55 • @rhermans I don't believe I've ever used that function before; let me try it. Thanks! – Mr.Wizard Jul 29 '18 at 13:56 • @Henrik I saw your comment after posting and it makes perfect sense now. I think what I was remembering is that nonzero-Positions is a nontrivial operation and would take measurable time, and I confused that with values. I rarely if ever use "NonzeroValues" but "NonzeroPositions" was a very common tool for me in v7 before Pick was optimized. – Mr.Wizard Jul 29 '18 at 14:10
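As an aside not taken from the thread: other sparse-array implementations expose the same stored-element count as cheap metadata rather than something to recompute. For instance, in SciPy:

```python
import numpy as np
from scipy import sparse

# The same 3x3 sparse matrix with four explicitly stored entries.
s = sparse.csr_matrix(
    (np.array([1, 2, 3, 4]), ([0, 1, 2, 0], [0, 1, 2, 2])), shape=(3, 3)
)
print(s.nnz)  # 4 -- count of stored entries, read from the structure itself
```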
https://www.risk.net/market-access/risk-management/7946356/what-drives-the-convertible-bond-market
# What drives the convertible bond market? Convertible bond issuance has been somewhat lacklustre since the financial crisis that began in 2007-08. However, over the past couple of years, as economies bounced back from the Covid-19 pandemic and stock markets rallied, issuance has surged. Dmitry Pugachevsky, director of research at Quantifi, a provider of risk, analytics and trading solutions, provides an overview of this instrument, modelling aspects, the reasons for its resurgence and whether the trend will continue in light of the current inflationary environment ### Overview The convertible bond is one of the more venerable instruments still in use in the global capital markets. The basic structure is fairly straightforward and, in this respect, convertibles have remained unchanged since they were developed in the 19th century. They pay buyers below-market fixed income returns, while the attached warrants can be exchanged for equity at the holder's option. They thus combine features of both equity and fixed income securities, and are often termed 'hybrid securities'. These instruments present advantages and disadvantages to borrowers and buyers. The borrower can access capital at a lower coupon than would be the case if it were to issue plain vanilla debt. It is also essentially raising equity on a deferred basis. This means the dilution of shareholders is postponed. ### Recent trends The convertible makes a lot of sense for a certain class of borrowers, which is why it has been around for so long. However, the market tanked after the financial crisis, after which issuance dwindled to a trickle. Equity values tumbled and investors lost faith in the instrument. Both demand and supply dried up. The collapse of interest from hedge funds that specialised in convertible arbitrage was perhaps the killer factor. It is estimated that arbitrage funds constituted perhaps two-thirds of the market pre-crisis, and many were highly leveraged – as was not uncommon at the time. However, the market picked up dramatically with the onset of the Covid-19 pandemic, when the convertible became one of the only ways in which stressed organisations could raise desperately needed capital at an even partially acceptable cost. Issuance boomed in 2020 and continued to do so in 2021. At the end of November 2021, secondary market outstanding in the global convertible market was $509.5 billion, according to investment bank calculations. Issuance over the course of the year was $137 billion. There is now high inflation to contend with, and convertibles fare particularly well in an inflationary environment. Over the past two years, the convertible market has been dominated by new, high-growth borrowers with limited earnings history. For these sorts of companies, inflation is particularly worrisome. According to one bank analyst: "High-growth names now constitute 50% of the market. This is an all-time high, and these are names for whom rising rates are toxic." Strategists are calling for $100 billion of global issuance in 2022 and, while this is less than the past two years, it is well above the 2012-19 yearly average of around $80 billion. ### What is the right model to use? In theory, the basic structure of convertibles might not seem overwhelmingly difficult to value. However, in reality, they are, given the myriad additional features they contain. As a result of the conversion option, convertibles have three sources of risk: interest rate, credit and equity.
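Before turning to the modelling, a toy example may make the hybrid payoff concrete (my illustration with made-up numbers; it ignores coupons, call/put features and default): at maturity the holder effectively takes the better of redemption at par or conversion into shares.

```python
# Toy maturity payoff of a plain convertible bond.
def convertible_payoff(share_price: float,
                       face: float = 100.0,
                       conversion_ratio: float = 4.0) -> float:
    """Holder redeems at par or converts into conversion_ratio shares."""
    return max(face, conversion_ratio * share_price)

print(convertible_payoff(20.0))  # 100.0 -> bond-like when shares are low
print(convertible_payoff(30.0))  # 120.0 -> equity-like when shares rally
```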
To build a model for pricing convertibles, one has to first decide which sources of risk should be considered random. There are different possible combinations; however, all should include equity because its volatility has a much higher effect than that of the other two market factors. One- and two-factor approaches can be modelled with a numerical solver or a tree. To model all three factors as random, Monte Carlo simulation used to be the best solution. However, advances in computational efficiency open up the possibility of using a three-factor solver/tree as well. As technology continues to develop for pricing convertibles, one could consider using machine learning. ### The treatment of equity Equity in convertibles should be modelled as a lognormal process. To model equity forwards properly, it is not enough to merely input the spot and the repo rate of the equity. Other factors such as dividends and funding rate also need to be taken into account. The most straightforward method is to separately model the equity forward curve, which already has all the information and can project forward for any given time. One other important factor, which is relatively new in the analytics world, is applying no-default probability to the equity forward. By doing so, equity is considered under a no-default assumption because, if a default occurs, the equity goes to zero and the convertible option is worthless. ### Treatment of rates Interest rate models for convertibles are usually analogous to those for callable bonds. This could be a simple short-rate model, following either a lognormal distribution such as Black–Karasinski, or a normal one such as Hull–White. Alternatively, one can select the forward-rate model, which can follow either normal or lognormal distributions. In this type of model, the volatilities and intracorrelations of forward rates can be calibrated to caps or swaptions. ### Treatment of credit Market best practice is to treat credit as a random factor, especially for high-yield bonds. For a random factor, its volatility and correlation with other factor(s) need to be defined. If a liquid credit default swap (CDS) option on the bond issuer exists, one should use implied volatility. Alternative approaches are either proxying by indexes or using historical data. Correlation with equity should be calibrated to historicals as well; in general, it is expected to be negative and large in absolute value. The new approach of applying no-default probabilities to equity dynamics was mentioned previously. With this, the conversion option increases quickly and almost linearly as the credit spread grows. At the same time, the total convertible bond price, which is the sum of the conversion option and a bond floor, is close to flat when the credit spread is growing. This means the convertible bond provides protection against credit volatility. Figure 1 shows the dependency of both the conversion option and total price on a bond's credit spread. In some cases, bootstrapping is not applicable and optimisation is necessary. This is particularly the case in scenarios where some input bonds are callable because, for callable bonds, the maturity is not easily defined. The credit curve should be either a separate input or implied from the market quote. The price of a convertible bond is a good indicator of the credit component. This means the option-adjusted or CDS spread can be implied from a market quote and hedged with a single-name CDS or a relative value trade.
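Looping back to the treatment of equity above: a minimal Monte Carlo sketch of the no-default adjustment, under a toy jump-to-default model with constant hazard rate (all parameter values are illustrative assumptions, not Quantifi's model). Surviving paths must drift faster by the hazard rate so that, once the default-to-zero paths are averaged in, the unconditional forward is unchanged.

```python
import numpy as np

# Lognormal equity with jump-to-default at constant hazard rate lam.
rng = np.random.default_rng(0)
S0, r, q, sigma, lam, T, n = 100.0, 0.03, 0.01, 0.30, 0.02, 1.0, 500_000

Z = rng.standard_normal(n)
# Conditional on survival, drift is (r - q + lam): the no-default forward.
S_T = S0 * np.exp((r - q + lam - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
survived = rng.random(n) < np.exp(-lam * T)   # survival probability exp(-lam*T)
S_T = np.where(survived, S_T, 0.0)            # equity jumps to zero at default

print(S_T.mean())                  # ~ S0 * exp((r - q) * T): unconditional forward
print(S0 * np.exp((r - q) * T))    # 102.02...
```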
Quantifi’s white paper, Take advantage of relative value credit opportunities with advanced bond analytics explores credit relative value strategies in more detail, including the use of bond analytics to execute these strategies. As outlined, convertibles have several advanced call and put features designed to protect both bond issuers and investors. While modelling call and put simultaneously can seem complex, this is not the case. As long as the strike of the put is less than the strike of a call, the valuation of the bond follows a regular minimum–maximum relationship and is independent of the order of the two options. ### Conclusion The underlying characteristics of convertible bonds make for interesting and complex pricing and valuation dynamics. To make informed decisions and take advantage of investment opportunities requires sophisticated pricing and risk analytics. In today’s fast-paced environment, firms that realise maximum benefits are those with access to powerful modelling, analytical and pricing capabilities. Quantifi’s integrated pre- and post-trade solutions allow market participants to better value, trade and risk-manage their exposures, and respond more effectively to changing market conditions. By applying the latest technology innovations, Quantifi provides new levels of usability, flexibility and integration.
https://www.geometrie-und-logik.de/geometrie/peano/peano-thg2-kurz/
### Introduction to Theorem Group 2 of the Peano Geometry

Meaning of the theorems. Overview of the theorems.

Theorems on Axioms I, II, III and IV: Theorems 36-43
Theorems on Axioms V, VI and VII: Theorems 44-58
Theorems on Axiom VIII: Theorems 59-85
Theorems on Axiom IX: Theorems 86-132
Theorems on Axiom X: Theorems 133-150
Theorems on Axiom XI: Theorems 151-171
Theorems on Axioms XII and XIII: Theorems 172-202
Theorems on Axiom XIV: Theorems 203-242
Theorems on Axioms XV and XVI: Theorems 243-244
2018-07-17 05:20:23
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9687888026237488, "perplexity": 5397.497286064534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589573.27/warc/CC-MAIN-20180717051134-20180717071134-00272.warc.gz"}
https://www.danieldsjoberg.com/ggsurvfit/articles/gallery.html
The gallery exhibits both default plots as well as the many modifications one can make.

library(ggsurvfit)
library(patchwork)

## Modifications with ggplot2

Let's begin by showing the default plot and common modifications that are made with ggplot2 functions.

• Expand axis to show percentages from 0% to 100%
• Limit plot to show up to 8 years of follow-up
• Add the percent sign to the y-axis label
• Reduce padding in the plot area around the curves
• Update color of the lines
• Use the ggplot2 minimal theme

gg_default <- survfit2(Surv(time, status) ~ surg, data = df_colon) %>%
  ggsurvfit() +
  labs(title = "Default")

gg_styled <- gg_default +
  coord_cartesian(xlim = c(0, 8)) +
  scale_y_continuous(
    limits = c(0, 1),
    labels = scales::percent,
    expand = c(0.01, 0)
  ) +
  scale_x_continuous(breaks = 0:9, expand = c(0.02, 0)) +
  scale_color_manual(values = c('#54738E', '#82AC7C')) +
  scale_fill_manual(values = c('#54738E', '#82AC7C')) +
  theme_minimal() +
  theme(legend.position = "bottom") +
  guides(color = guide_legend(ncol = 1)) +
  labs(title = "Modified", y = "Percentage Survival")

gg_default + gg_styled

## Risk Tables

The default risk table styling is ready for publication.

survfit2(Surv(time, status) ~ surg, data = df_colon) %>%
  ggsurvfit() +
  add_risktable()

You can also group the risk table by the statistics rather than the stratum. Let's also add additional time points where the statistics are reported and extend the y axis.

ggrisktable <- survfit2(Surv(time, status) ~ surg, data = df_colon) %>%
  ggsurvfit() +
  add_risktable(
    risktable_group = "risktable_stats",
    times = c(0, 2, 4, 6, 8) # illustrative reporting times
  ) +
  scale_x_continuous(breaks = 0:9) +
  scale_y_continuous(limits = c(0, 1))

ggrisktable

Use add_risktable_strata_symbol() to replace long stratum labels with a color symbol. The default symbol is a colored rectangle and you can change it to any UTF-8 symbol or text string. In the example below, we've updated the symbol to a circle.

ggrisktable +
  add_risktable_strata_symbol(symbol = "\U25CF", size = 10)

You can further customize the risk table using themes and the add_risktable(...) arguments. For example, use the following code to increase the font size of both the risk table text and the y-axis label.

survfit2(Surv(time, status) ~ surg, data = df_colon) %>%
  ggsurvfit(size = 0.8) +
  add_risktable(
    risktable_height = 0.33,
    size = 4, # increase font size of risk table statistics
    theme =   # increase font size of risk table title and y-axis label
      list(
        theme_risktable_default(axis.text.y.size = 11, plot.title.size = 11),
        theme(plot.title = element_text(face = "bold"))
      )
  )

## Quantiles and Censor Markings

Add guidelines for survival quantiles and markings for censored patients using add_quantile() and add_censor_mark(). The add_quantile() function lets users place guidelines by specifying either the y-intercept or the x-intercept where the lines originate.

survfit2(Surv(time, status) ~ surg, data = df_colon) %>%
  ggsurvfit(size = 0.8) +
  add_censor_mark(size = 2, alpha = 0.2) +
  add_quantile(y_value = 0.5, linetype = "dotted", color = "grey30", size = 0.8) +
  add_quantile(x_value = 5, color = "grey30", size = 0.8)

## Side-by-Side

One of my favorite features of the {ggsurvfit} package is that any {ggplot2} function may be used to modify the plot, and the risk tables will still align with the primary plot. This is accomplished by delaying the construction of the risk tables until the plot is printed. Because of this delayed build of the risk tables, we must take one additional step when placing figures side-by-side that contain risk tables: we must use the ggsurvfit_build() function.
This function will construct the risk tables and combine them with the primary plot. Once the plots are built, they can be placed side-by-side with {patchwork}.

p <- survfit2(Surv(time, status) ~ 1, df_colon) %>%
  ggsurvfit() +
  scale_y_continuous(limits = c(0, 1)) +
  scale_x_continuous(n.breaks = 7)

# build plot (which constructs the risktable)
built_p <- ggsurvfit_build(p)

# place plots side-by-side
wrap_plots(built_p, built_p, ncol = 2)

NOTE: You can also use the patchwork arithmetic operators, but you must first wrap the plot in patchwork::wrap_elements(), e.g. wrap_elements(built_p) | wrap_elements(built_p).

## P-values

Compare curves among the strata using the add_pvalue() function. P-values can be placed either in the figure caption (the default) or in the plot area as an annotation.

p <- survfit2(Surv(time, status) ~ surg, data = df_colon) %>%
  ggsurvfit()

# place p-value in caption
p1 <- p + add_pvalue(caption = "Log-rank {p.value}")

# place p-value as a plot annotation
p2 <- p + add_pvalue(location = "annotation", x = 8.5)

p1 + p2 + patchwork::plot_layout(guides = "collect") & theme(legend.position = "bottom")

## Scales

Unlike most {ggplot2} functions, scales are not additive. This means that if a scale attribute is modified in one call to scale_x_continuous(), a second call to scale_x_continuous() will write over all changes made in the first. For this reason, the ggsurvfit() and ggcuminc() functions do not modify the default {ggplot2} scales; rather, all changes to the scales are left to the user. But we do export a ggsurvfit-specific scales function to help. The scale_ggsurvfit() function applies default scales often seen in survival figures: reduced plot padding, y-axis labels shown as percentages, and survival curves displayed from 0% to 100% on the y-axis. In the example below, we utilize scale_ggsurvfit() and make additional changes to the scales, such as specifying the breaks on the x-axis.

survfit2(Surv(time, status) ~ surg, data = df_colon) %>%
  ggsurvfit(size = 1) +
  scale_ggsurvfit(x_scales = list(breaks = 0:9))

## Transformations

Show the probability of an event rather than the probability of being free from the event with transformations. Custom transformations are also available.

p <- survfit2(Surv(time, status) ~ surg, data = df_colon) %>%
  ggsurvfit(type = "risk", size = 0.8)
p

## Saving Plots

The {ggsurvfit} package plays well with ggplot2::ggsave(), which allows you to specify the output file format, the DPI, the height and width of the image, and more.

path_to_image <- file.path(tempdir(), "image.png")
path_to_image
#> [1] "C:\\Users\\SjobergD\\AppData\\Local\\Temp\\RtmpYHlYVC/image.png"

ggsave(file = path_to_image, plot = p)
#> Saving 7.29 x 4.51 in image

## Extensions

Because {ggsurvfit} functions are written as proper {ggplot2} geoms, you can weave in any {ggplot2} functions as well as {ggplot2} extensions, such as {gghighlight} and {ggeasy}.

survfit2(Surv(time, status) ~ rx, data = df_colon) %>%
  ggsurvfit() +
  gghighlight::gghighlight(
    strata == "Levamisole+5-FU",
    calculate_per_facet = TRUE
  ) +
  ggeasy::easy_remove_legend() +
  ggeasy::easy_y_axis_labels_size(size = 15) +
  ggeasy::easy_x_axis_labels_size(size = 15)

## Faceting

Curves created with {ggsurvfit} can also later be faceted using {ggplot2}. Note, however, that faceted curves cannot include a risk table. The ggsurvfit() function calls tidy_survfit() to create the data frame that is used to create the figure. In the data frame, there is a column named "strata", which we will facet over.
survfit2(Surv(time, status) ~ surg, data = df_colon) %>%
  ggsurvfit() +
  facet_wrap(~strata, nrow = 1) +
  theme(legend.position = "none") +
  scale_x_continuous(n.breaks = 6) +
  labs(title = "PFS by Duration between Surgery and Treatment")

## Grey-scale Figures

If you need a black-and-white figure, you can get one with the grey-scale ggplot2 scale functions.

survfit2(Surv(time, status) ~ surg, data = df_colon) %>%
  ggsurvfit() +
  scale_color_grey() +
  scale_fill_grey() +
  labs(title = "Grey Scale")

## KMunicate

To get figures that align with the guidelines outlined in "Proposals on Kaplan–Meier plots in medical research and a survey of stakeholder views: KMunicate.", use the theme_ggsurvfit_KMunicate() theme along with the options shown below.

survfit2(Surv(time, status) ~ surg, data = df_colon) %>%
  ggsurvfit(linetype_aes = TRUE) +
  theme_ggsurvfit_KMunicate() +
  theme(legend.position = c(0.85, 0.85))
2022-12-02 00:21:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45990198850631714, "perplexity": 11133.64758839318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710870.69/warc/CC-MAIN-20221201221914-20221202011914-00715.warc.gz"}
http://www.computer.org/csdl/trans/tc/1971/06/01671921-abs.html
Issue No. 06 - June (1971 vol. 20) pp: 688-690

C. Harlow, IEEE

ABSTRACT Hartmanis and Stearns defined the concept of an inessential error in their study of errors in sequential machines and represented such errors by means of an error partition π_E. Although they showed that π_E could not be determined using only the usual partition pair algebras, they did not provide a means by which it could be determined. The purpose of this note is to develop an algorithm for the determination of π_E for a given machine.

INDEX TERMS Error partition, inessential error, π_E, sequential machines.

CITATION C. Harlow, C.L. Coates, "Inessential Errors in Sequential Machines", IEEE Transactions on Computers, vol. 20, no. 6, pp. 688-690, June 1971, doi:10.1109/T-C.1971.223328
2015-10-05 01:29:08
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8159915208816528, "perplexity": 2089.7120829778746}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736676547.12/warc/CC-MAIN-20151001215756-00219-ip-10-137-6-227.ec2.internal.warc.gz"}
https://planetmath.org/elementarymatrix
# elementary matrix

## Elementary Operations on Matrices

Let $\mathbb{M}$ be the set of all $m\times n$ matrices (over some commutative ring $R$). An operation on $\mathbb{M}$ is called an elementary row operation if it takes a matrix $M\in\mathbb{M}$ and does one of the following:

1. interchanges two rows of $M$,
2. multiplies a row of $M$ by a non-zero element of $R$,
3. adds a (constant) multiple of a row of $M$ to another row of $M$.

An elementary column operation is defined similarly. An operation on $\mathbb{M}$ is an elementary operation if it is either an elementary row operation or an elementary column operation.

For example, if $M=\begin{pmatrix}a&b\\ c&d\\ e&f\end{pmatrix}$, then the following matrices correspond respectively to the three types of elementary row operations described above:

1. $\begin{pmatrix}a&b\\ e&f\\ c&d\end{pmatrix}$ is obtained by interchanging rows 2 and 3 of $M$,
2. $\begin{pmatrix}a&b\\ rc&rd\\ e&f\end{pmatrix}$ is obtained by multiplying the second row of $M$ by $r\neq 0$,
3. $\begin{pmatrix}a&b\\ c&d\\ sa+e&sb+f\end{pmatrix}$ is obtained by adding $s$ times row 1 to row 3 of $M$.

Some immediate observations: elementary operations of types 1 and 3 are always invertible. The inverse of a type 1 elementary operation is itself, as interchanging two rows twice returns the original matrix. The inverse of a type 3 elementary operation is to add the negative of the same multiple of the first row to the second row, returning the second row to its original form. Type 2 is invertible provided that the multiplier has an inverse.

Some notation: for each type $k$ (where $k=1,2,3$) of elementary operations, let $E_{c}^{k}(A)$ be the set of all matrices obtained from $A$ via an elementary column operation of type $k$, and $E_{r}^{k}(A)$ the set of all matrices obtained from $A$ via an elementary row operation of type $k$.

## Elementary Matrices

Now, assume $R$ has $1$. An $n\times n$ elementary matrix is a (square) matrix obtained from the identity matrix $I_{n}$ by performing an elementary operation. As a result, we have three types of elementary matrices, each corresponding to a type of elementary operation:

1. transposition matrix $T_{ij}$: a matrix obtained from $I_{n}$ with rows $i$ and $j$ switched,
2. basic diagonal matrix $D_{i}(r)$: a diagonal matrix whose entries are $1$ except in cell $(i,i)$, whose entry is a non-zero element $r$ of $R$,
3. row replacement matrix $E_{ij}(s)$: $I_{n}+sU_{ij}$, where $s\in R$ and $U_{ij}$ is a matrix unit with $i\neq j$.

For example, among the $3\times 3$ matrices, we have

$T_{12}=\begin{pmatrix}0&1&0\\ 1&0&0\\ 0&0&1\end{pmatrix},\quad D_{3}(r)=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&r\end{pmatrix},\quad\mbox{and}\quad E_{32}(s)=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&s&1\end{pmatrix}$

For each positive integer $n$, let $\mathbb{E}^{k}(n)$ be the collection of all $n\times n$ elementary matrices of type $k$, where $k=1,2,3$. Below are some basic properties of elementary matrices:

• $T_{ij}=T_{ji}$, and $T_{ij}^{2}=I_{n}$.
• $D_{i}(r)D_{i}(r^{-1})=I_{n}$, provided that $r^{-1}$ exists.
• $E_{ij}(s)E_{ij}(-s)=I_{n}$.
• $\det(T_{ij})=-1$, $\det(D_{i}(r))=r$, and $\det(E_{ij}(s))=1$.
• If $A$ is an $m\times n$ matrix, then $E_{c}^{k}(A)=\{AE\mid E\in\mathbb{E}^{k}(n)\}\qquad\mbox{and}\qquad E_{r}^{k}(A)=\{EA\mid E\in\mathbb{E}^{k}(m)\}.$
• Every non-singular matrix can be written as a product of elementary matrices. This is the same as saying: given a non-singular matrix, one can perform a finite number of elementary row (column) operations on it to obtain the identity matrix.
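To make the left-multiplication property above concrete, here is a small Python/NumPy sketch that builds the three $3\times 3$ elementary matrices shown earlier over the real numbers and checks their row-operation effect, determinants, and inverses. The values of $r$, $s$, and the test matrix are illustrative choices of ours, not from the entry.

```python
import numpy as np

n, r, s = 3, 5.0, 2.0

# Type 1: transposition matrix T_12 (identity with rows 1 and 2 swapped)
T12 = np.eye(n)
T12[[0, 1]] = T12[[1, 0]]

# Type 2: basic diagonal matrix D_3(r)
D3 = np.eye(n)
D3[2, 2] = r

# Type 3: row replacement matrix E_32(s) = I + s*U_32
E32 = np.eye(n)
E32[2, 1] = s

M = np.arange(1.0, 10.0).reshape(3, 3)  # arbitrary test matrix

# Left-multiplication performs the corresponding row operation on M
assert np.allclose(T12 @ M, M[[1, 0, 2]])          # rows 1 and 2 swapped
assert np.allclose((D3 @ M)[2], r * M[2])          # row 3 scaled by r
assert np.allclose((E32 @ M)[2], M[2] + s * M[1])  # s * row 2 added to row 3

# Determinants match the properties listed above
assert np.isclose(np.linalg.det(T12), -1)
assert np.isclose(np.linalg.det(D3), r)
assert np.isclose(np.linalg.det(E32), 1)

# Invertibility: T_12^2 = I, D_3(r) D_3(1/r) = I, E_32(s) E_32(-s) = I
D3_inv, E32_neg = np.eye(n), np.eye(n)
D3_inv[2, 2], E32_neg[2, 1] = 1 / r, -s
assert np.allclose(T12 @ T12, np.eye(n))
assert np.allclose(D3 @ D3_inv, np.eye(n))
assert np.allclose(E32 @ E32_neg, np.eye(n))
```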
Remarks.

• One can also define elementary matrix operations on matrices over general rings. However, care must be taken to consider left scalar multiplications and right scalar multiplications as separate operations.
• The discussion above pertains to elementary linear algebra. In algebraic K-theory, an elementary matrix is defined only as a row replacement matrix (type 3 above).

Title: elementary matrix
Canonical name: ElementaryMatrix
Date of creation: 2013-03-22 18:30:38
Last modified on: 2013-03-22 18:30:38
Owner: CWoo (3771)
Last modified by: CWoo (3771)
Numerical id: 13
Author: CWoo (3771)
Entry type: Definition
Classification: msc 15-01
Related topics: MatrixUnit, GaussianElimination
Defines: elementary operation, elementary column operation, elementary row operation, basic diagonal matrix, transposition matrix, row replacement matrix
2020-04-09 11:52:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 64, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9202055335044861, "perplexity": 298.51848830141415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371833063.93/warc/CC-MAIN-20200409091317-20200409121817-00454.warc.gz"}
https://www.investopedia.com/terms/p/present-value-annuity.asp
Present Value of an Annuity: Meaning, Formula, and Example

What Is the Present Value of an Annuity?

The present value of an annuity is the current value of future payments from an annuity, given a specified rate of return, or discount rate. The higher the discount rate, the lower the present value of the annuity. Present value (PV) is an important calculation that relies on the concept of the time value of money, whereby a dollar today is relatively more "valuable" in terms of its purchasing power than a dollar in the future.

Key Takeaways

• The present value of an annuity refers to how much money would be needed today to fund a series of future annuity payments.
• Because of the time value of money, a sum of money received today is worth more than the same sum at a future date.
• You can use a present value calculation to determine whether you'll receive more money by taking a lump sum now or an annuity spread out over a number of years.

Understanding the Present Value of an Annuity

An annuity is a financial product that provides a stream of payments to an individual over a period of time, typically in the form of regular installments. Annuities can be either immediate or deferred, depending on when the payments begin. Immediate annuities start paying out right away, while deferred annuities have a delay before payments begin. Because of the time value of money, money received today is worth more than the same amount of money in the future because it can be invested in the meantime. By the same logic, $5,000 received today is worth more than the same amount spread over five annual installments of $1,000 each.

Present value is an important concept for annuities because it allows individuals to compare the value of receiving a series of payments in the future to the value of receiving a lump sum payment today. By calculating the present value of an annuity, individuals can determine whether it is more beneficial for them to receive a lump sum payment or to receive an annuity spread out over a number of years. This can be particularly important when making financial decisions, such as whether to take a lump sum payment from a pension plan or to receive a series of payments from an annuity. Present value calculations can also be used to compare the relative value of different annuity options, such as annuities with different payment amounts or different payment schedules.

Present Value and the Discount Rate

The discount rate is a key factor in calculating the present value of an annuity. The discount rate is an assumed rate of return or interest rate that is used to determine the present value of future payments. The discount rate reflects the time value of money, which means that a dollar today is worth more than a dollar in the future because it can be invested and potentially earn a return. The higher the discount rate, the lower the present value of the annuity, because the future payments are discounted more heavily. Conversely, a lower discount rate results in a higher present value for the annuity, because the future payments are discounted less heavily. In general, the discount rate used to calculate the present value of an annuity should reflect the individual's opportunity cost of capital, or the return they could expect to earn by investing in other financial instruments.
For example, if an individual could earn a 5% return by investing in a high-quality corporate bond, they might use a 5% discount rate when calculating the present value of an annuity. The smallest discount rate used in these calculations is the risk-free rate of return. U.S. Treasury bonds are generally considered to be the closest thing to a risk-free investment, so their return is often used for this purpose.

It's important to note that the discount rate used in the present value calculation is not the same as the interest rate that may be applied to the payments in the annuity. The discount rate reflects the time value of money, while the interest rate applied to the annuity payments reflects the cost of borrowing or the return earned on the investment. The opposite of present value is future value (FV). The FV of money is also calculated using a discount rate, but extends into the future.

Formula and Calculation of the Present Value of an Annuity

The formula for the present value of an ordinary annuity is below. An ordinary annuity pays interest at the end of a particular period, rather than at the beginning:

\begin{aligned} &\text{P} = \text{PMT} \times \frac{1 - \Big(\frac{1}{(1 + r)^n}\Big)}{r} \\ &\textbf{where:} \\ &\text{P} = \text{Present value of an annuity stream} \\ &\text{PMT} = \text{Dollar amount of each annuity payment} \\ &r = \text{Interest rate (also known as discount rate)} \\ &n = \text{Number of periods in which payments will be made} \\ \end{aligned}

Example of the Present Value of an Annuity

Assume a person has the opportunity to receive an ordinary annuity that pays $50,000 per year for the next 25 years, with a 6% discount rate, or take a $650,000 lump-sum payment. Which is the better option? Using the above formula, the present value of the annuity is:

\begin{aligned} \text{Present value} &= \$50,000 \times \frac{1 - \Big(\frac{1}{(1 + 0.06)^{25}}\Big)}{0.06} \\ &= \$639,168 \\ \end{aligned}

Because the annuity's present value of $639,168 is less than the $650,000 lump sum, the lump sum is the better option in this example.

Why Is Future Value (FV) Important to Investors?

Future value (FV) is the value of a current asset at a future date based on an assumed rate of growth. It is important to investors as they can use it to estimate how much an investment made today will be worth in the future. This would aid them in making sound investment decisions based on their anticipated needs. However, external economic factors, such as inflation, can adversely affect the future value of the asset by eroding its value.

How Does an Ordinary Annuity Differ From an Annuity Due?

An ordinary annuity is a series of equal payments made at the end of consecutive periods over a fixed length of time. An example of an ordinary annuity includes loans, such as mortgages. The payment for an annuity due is made at the beginning of each period. A common example of an annuity due payment is rent. This variance in when the payments are made results in different present and future value calculations.

What Is the Formula for the Present Value of an Ordinary Annuity?

The formula for the present value of an ordinary annuity is:

\begin{aligned} &\text{P} = \text{PMT} \times \frac{1 - \Big(\frac{1}{(1 + r)^n}\Big)}{r} \\ &\textbf{where:} \\ &\text{P} = \text{Present value of an annuity stream} \\ &\text{PMT} = \text{Dollar amount of each annuity payment} \\ &r = \text{Interest rate (also known as discount rate)} \\ &n = \text{Number of periods in which payments will be made} \\ \end{aligned}
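As a quick numerical check of the worked example above, here is a short Python sketch; the helper name pv_ordinary_annuity is our own, not from the article.

```python
def pv_ordinary_annuity(pmt: float, r: float, n: int) -> float:
    """Present value of an ordinary annuity: PMT * (1 - (1 + r)**-n) / r."""
    return pmt * (1 - (1 + r) ** -n) / r

# The $50,000-per-year, 25-year, 6%-discount-rate example from the text
pv = pv_ordinary_annuity(pmt=50_000, r=0.06, n=25)
print(round(pv))  # ~639168, matching the $639,168 figure in the article
```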
What Is the Formula for the Present Value of an Annuity Due?

With an annuity due, in which payments are made at the beginning of each period, the formula is slightly different than that of an ordinary annuity. To find the value of an annuity due, simply multiply the above formula by a factor of (1 + r):

\begin{aligned} &\text{P} = \text{PMT} \times \frac{1 - \Big(\frac{1}{(1 + r)^n}\Big)}{r} \times (1 + r) \\ \end{aligned}

The Bottom Line

The present value (PV) of an annuity is the current value of future payments from an annuity, given a specified rate of return or discount rate. It is calculated using a formula that takes into account the time value of money and the discount rate, which is an assumed rate of return or interest rate over the same duration as the payments. The present value of an annuity can be used to determine whether it is more beneficial to receive a lump sum payment or an annuity spread out over a number of years.
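The annuity-due variant is the same computation scaled by (1 + r). A self-contained Python sketch, again with a helper name of our own choosing:

```python
def pv_annuity_due(pmt: float, r: float, n: int) -> float:
    """Annuity due: the ordinary-annuity formula scaled by (1 + r),
    since each payment arrives one period earlier."""
    return pmt * (1 - (1 + r) ** -n) / r * (1 + r)

# Same cash flows as the example above, but paid at the start of each period
print(round(pv_annuity_due(50_000, 0.06, 25)))  # ~677518, i.e. (1 + r) times the ordinary PV
```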
2023-01-30 11:26:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9671798348426819, "perplexity": 748.746959850444}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499816.79/warc/CC-MAIN-20230130101912-20230130131912-00765.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/trigonometric-functions-general-solution-of-trigonometric-equation-of-the-type-find-general-solution-equation-sin-2x-sin-4x-sin-6x-0_10974
HSC Arts 12th Board Exam, Maharashtra State Board

# Find the general solution of the equation sin 2x + sin 4x + sin 6x = 0 - HSC Arts 12th Board Exam - Mathematics and Statistics

Concept: Trigonometric Functions - General Solution of Trigonometric Equation of the Type

#### Question

Find the general solution of the equation sin 2x + sin 4x + sin 6x = 0

#### Solution

(sin 2x + sin 6x) + sin 4x = 0
2 sin 4x cos 2x + sin 4x = 0
sin 4x (2 cos 2x + 1) = 0
sin 4x = 0 or 2 cos 2x + 1 = 0
sin 4x = 0 or cos 2x = -1/2 = -cos(π/3) = cos(π - π/3) = cos(2π/3)

Using sin θ = 0 ⇒ θ = nπ:
4x = nπ, so the general solution is x = nπ/4.

Using cos θ = cos α ⇒ θ = 2mπ ± α:
cos 2x = cos(2π/3)
2x = 2mπ ± 2π/3, so the general solution is x = mπ ± π/3,

where m, n ∈ ℤ.

Is there an error in this question or solution?

#### APPEARS IN

2016-2017 (March) (with solutions) Question 2.1.3 | 3.00 marks
2016-2017 (July) (with solutions) Question 3.1.2 | 3.00 marks
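As a numerical spot-check of the two solution families in the solution above (our own addition, not part of the board-exam answer), we can evaluate the left-hand side at several members of each family in Python:

```python
import math

def f(x: float) -> float:
    """Left-hand side of the equation: sin 2x + sin 4x + sin 6x."""
    return math.sin(2 * x) + math.sin(4 * x) + math.sin(6 * x)

# Family 1: x = n*pi/4;  Family 2: x = m*pi +/- pi/3
family1 = [n * math.pi / 4 for n in range(-4, 5)]
family2 = [m * math.pi + sign * math.pi / 3 for m in range(-2, 3) for sign in (+1, -1)]

assert all(abs(f(x)) < 1e-9 for x in family1 + family2)
print("all sampled solutions satisfy sin 2x + sin 4x + sin 6x = 0")
```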
2019-12-09 21:40:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.729389488697052, "perplexity": 6863.164533284711}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540523790.58/warc/CC-MAIN-20191209201914-20191209225914-00382.warc.gz"}
http://www.ijpe-online.com/EN/10.23940/ijpe.21.09.p3.766778
Int J Performability Eng ›› 2021, Vol. 17 ›› Issue (9): 766-778.

### Preventive Maintenance Optimization Regarding Large-Scale Systems based on the Life-Cycle Cost

Ruiqi Wang, Guangyu Chen*, Na Liang, Zheng Huang

1. School of Management and Economics, University of Electronic Science and Technology of China (UESTC), Chengdu, 610000, China

• Contact: * E-mail address: chenguangyu@uestc.edu.cn

Abstract: Unit degradation complicates the comprehensive optimization of reliability design and preventive maintenance (PM) policies for large-scale systems considering the life-cycle. Based on the unit failure rate obeying the Weibull distribution, we propose a cost optimization model for large-scale systems under reliability constraints from the life-cycle perspective. In this study, we consider a simple multi-unit preventive joint maintenance policy where units are assessed and fixed only during planned inspections. However, nonlinear optimization becomes increasingly difficult due to the exponential combination growth caused by numerous units. To overcome this challenge, a genetic algorithm (GA) program is adopted to obtain the global optimal solution covering the unit reliability in the design and manufacturing stages and system PM period in the operation stage. Through real-world example analysis, the correctness and effectiveness of the proposed model and algorithm are verified. The relationship among decision variables, such as maintenance improvement factor, unit reliability, and PM period, is examined. The results simplify the reliability design process for system engineers and enrich reliability theory and applications.
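The abstract does not give the model's equations, but for readers unfamiliar with the Weibull assumption it mentions, a minimal Python sketch of a Weibull failure (hazard) rate looks like the following. The shape and scale parameters below are illustrative values of ours; the paper's values are not stated in the abstract.

```python
import math

def weibull_hazard(t: float, beta: float, eta: float) -> float:
    """Weibull failure rate h(t) = (beta/eta) * (t/eta)**(beta - 1).

    beta > 1 gives an increasing failure rate (wear-out), the usual
    setting in which preventive maintenance is worthwhile.
    """
    return (beta / eta) * (t / eta) ** (beta - 1)

# Illustrative parameters only (assumed, not from the paper)
beta, eta = 2.5, 1000.0  # shape, scale (e.g., hours)
for t in (100, 500, 1000):
    print(f"h({t}) = {weibull_hazard(t, beta, eta):.6f}")
```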
2022-08-10 02:12:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22794747352600098, "perplexity": 2706.6239783732212}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571097.39/warc/CC-MAIN-20220810010059-20220810040059-00060.warc.gz"}
https://www.semanticscholar.org/paper/A-Lyapunov-Like-Characterization-of-Asymptotic-Sontag/3549c27e3ee70a1d90c69f8f038458de71bd6a85
# A Lyapunov-Like Characterization of Asymptotic Controllability

@article{Sontag1983ALC, title={A Lyapunov-Like Characterization of Asymptotic Controllability}, author={Eduardo Sontag}, journal={SIAM Journal on Control and Optimization}, year={1983}, volume={21}, pages={462-471} }

• Eduardo Sontag • Published 1 May 1983 • Mathematics • SIAM Journal on Control and Optimization

It is shown that a control system in ${\bf R}^n$ is asymptotically controllable to the origin if and only if there exists a positive definite continuous functional of the states whose derivative can be made negative by appropriate choices of controls.
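In modern terminology, the functional described in this abstract is a control-Lyapunov function. As a paraphrase of the abstract (not a formula quoted from the paper): for $\dot{x} = f(x,u)$ with controls $u \in U$, asymptotic controllability to the origin is equivalent to the existence of a continuous, positive definite $V \colon {\bf R}^n \to {\bf R}_{\geq 0}$ such that

$\inf_{u \in U} \dot{V}(x,u) < 0 \quad \text{for all } x \neq 0,$

where the "derivative" $\dot{V}$ must be taken in a generalized (e.g., Dini) sense, since $V$ is only assumed continuous.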
552 Citations

• Mathematics • 1996: It is shown that every asymptotically controllable system can be stabilized by means of some (discontinuous) feedback law. One of the contributions of the paper is in defining precisely the meaning
• Mathematics • 1999: This paper shows that, for time varying systems, global asymptotic controllability to a given closed subset of the state space is equivalent to the existence of a continuous control-Lyapunov function
• Mathematics • 42nd IEEE International Conference on Decision and Control (IEEE Cat. No.03CH37475) • 2003: We demonstrate the existence of a smooth control-Lyapunov function (CLF) for difference equations asymptotically controllable to closed sets. This follows from a more general result on the existence
• The main result of this note is an external stabilizability theorem for discontinuous systems affine in the control (with solutions intended in the Filippov sense). In order to get it we first
• A general notion of global asymptotic controllability to a given equilibrium of a time-varying system is introduced and it is shown that this property is equivalent to the existence of a lower
• Mathematics • 1995: We deal with the question of obtaining explicit feedback control laws that stabilize a nonlinear system, under the assumption that a "control Lyapunov function" is known. In previous work, the case
• Mathematics • IEEE Trans. Autom. Control. • 1997: It is shown that every asymptotically controllable system can be globally stabilized by means of some (discontinuous) feedback law. The stabilizing strategy is based on pointwise optimization of a
• Mathematics • 1999: It is shown that every asymptotically controllable system can be globally stabilized by means of some (discontinuous) feedback law. The stabilizing strategy is based on pointwise optimization of a

## References (showing 1-10 of 30)

• The question of whether a set is reachable by a nonlinear control system is answered in terms of the properties of a convex optimization problem. The set is reachable or not according to whether the
• Mathematics • 1980 19th IEEE Conference on Decision and Control including the Symposium on Adaptive Processes • 1980: We show that, in general, it is impossible to stabilize a controllable system by means of a continuous feedback, even if memory is allowed. No optimality considerations are involved. All state spaces
• Text on stability of control system as developed from direct method of Liapunov, noting V. M. Popov contribution
• The relationship between relaxed controls and the family of processes or flows generated by ordinary controls is studied. We find that the flows generated by the relaxed controls form a completion of
• This monograph is intended for use in a one-semester graduate course or advanced undergraduate course and contains the principles of general control theory and proofs of the maximum principle and basic existence theorems of optimal control theory.
• Mathematics • 1977: I. Elements of Stability Theory.- 1. A First Glance at Stability Concepts.- 2. Various Definitions of Stability and Attractivity.- 3. Auxiliary Functions.- 4. Stability and Partial Stability.- 5.
• Mathematics • IEEE Transactions on Systems, Man, and Cybernetics • 1976
2023-02-05 18:40:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.719070553779602, "perplexity": 1043.4038403171014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500273.30/warc/CC-MAIN-20230205161658-20230205191658-00531.warc.gz"}
https://lardbucket.org/blog/archives/2014/11/30/2012-book-archive-now-with-pdfs/
# lardbucket: 2012 book archive: now with pdfs

## 11/30/2014

### 2012 Book Archive: Now With PDFs

Filed under: General — Andy @ 10:58 am

Almost two years ago, I launched (what became) my 2012 book archive. There’s a bit of background on that project on that page. While there have been a few minor developments since then, none of them have been noteworthy enough to document here. Recently, however, I decided that I would try to make PDF copies of the books. I wanted these to be good-quality PDFs, as although a number of PDFs of the books had circulated from other groups in the past, they weren’t particularly visually appealing: they were essentially what you would get by quickly printing each of the HTML files from the publisher. My guess was that a simple HTML file with all of the book content could be combined with a quick print CSS style to get good-quality output without too much effort. I figured that with a decent number of iterations on the book CSS and test runs, the project might take a week or so. Over two months later, I’m finally set to release the PDFs. I’ve made a few notes of the things that I did along the way, in case anybody finds them useful in the future. If you just want the PDFs, you can visit the 2012 book archive and download them there. There are a few ways to download each book: you can download a whole-book PDF file, or a PDF for each chapter. Both are accessible from a book’s table of contents, and the whole-book PDFs can be downloaded from the archive’s book list as well.

## Getting a PDF

It’s worth noting that the content I have from the books is already in an HTML format, not necessarily the raw input to a book creator. I decided early on to try to work with HTML and CSS, because although the HTML I have is structured and could probably be parsed into a different format, doing that correctly for over a hundred books would be iffy. With that said, the HTML was well-structured, and lent itself to easy use of CSS to style specific things. If you’ve been following CSS development, there are a lot of features in CSS that are supposed to allow styling of printed content. Unfortunately, support for some of these in browsers is spotty, and they don’t necessarily provide all of the features that you’d like when styling a book. (In particular, footnotes, page-relative positioning, page numbering, and PDF bookmarks are all difficult. There are likely other features I’ve forgotten about as well.) At first, I was hoping to be able to use something like wkhtmltopdf, which wraps WebKit and converts an input file to a PDF. This gave me a number of problems, and didn’t seem to support the concept of pages as natively as would be desirable. (It’s still impressive that they managed to get the project to work as well as it does, but it doesn’t produce nice-looking books yet.) After that, I decided that perhaps Firefox’s support for printing would work for me: it works pretty well with many of the CSS printing features, and I can probably script Firefox to output PDFs if I want. Unfortunately, I again ran into bugs with the rendering. I don’t recall the details at the moment, but I believe that content had a tendency to not wrap between pages in the right spots. Either way, this sent me in search of a high-quality PDF rendering solution. If you look online for advice about generating PDFs from HTML, you will inevitably run across many people suggesting PrinceXML (or, as it seems to be rebranding itself, Prince). They’re probably right.
It is a commercial piece of software in a case where I had hoped to use free software, but it is still the best solution I have found by far, both in ease of use and functionality.

## Princely Things

Prince itself is not cheap. A personal license is $495 at the time of writing, and even that may not cover what I intend to do in terms of converting books automatically. (To be clear, it might be covered, but only just barely if it is. I haven’t asked, for reasons I’ll explain shortly.) If you are doing anything serious with Prince, you’re probably looking at a $3800 license per server generating PDFs, or a $1900 one if you’re only doing academic things. Upgrades are available for an additional annual cost. To be clear, if you’re generating revenue from the PDFs (or even just saving yourself loads of time), Prince is almost certainly worth every penny, but it’s prohibitive for side projects. For non-commercial projects, Prince offers a free version with the requirement that you allow it to add a logo and link to the corner of your document’s first page, link to their website wherever you have Prince PDFs for download, and link to their website on a sponsors/partners page. This is mostly unobtrusive (although a tad confusing at first: I’ve considered trying to style in a little “Made with” above the logo to explain why it’s there), and very nice of Prince to allow. (To get the “Non-commercial” license, just download the software: you don’t need a special license key or anything.) In fact, I had a question about their licensing (“The books are licensed under a Creative Commons license that doesn’t allow me to add restrictions to them, so is it required for people who receive the PDFs from me to keep the Prince logo on them? If so, I can’t use the noncommercial license.”), emailed them, and got an email back quite quickly from Håkon Wium Lie, Prince’s Director (not to mention CTO at Opera and founding member of the Pirate Party of Norway). He’s definitely on top of things, and was quite happy to help. (The answer is no, other people can do whatever they want to the PDFs. In my case, they’re still subject to the Creative Commons license they always were, but that’s not because of Prince.) Later, I had a question about how to get something to render correctly (a somewhat minor, obscure layout bug), and quickly received a comment from Mike Day, the CEO, noting that they were looking into the issue. When I followed up, the bug hadn’t yet been fixed (it undoubtedly has tricky interactions with their page layout code), but I quickly received an alternative suggestion complete with example code. Definitely a pleasant experience all the way around. If you’re looking for a cheaper option to start with, you’ll probably run into DocRaptor as well. DocRaptor started out as Prince-as-a-service, providing an API to allow people to generate PDFs using Prince. It now appears to support Excel files, although I haven’t looked in to those features. For many people, the benefits of being able to rely on DocRaptor to scale up as your workloads do (they claim “thousands of documents a second”) and the lower initial costs are probably a great benefit. They also provide well-supported libraries for a number of languages, where Prince usage is largely done by command line (although Prince has a PHP API as well). Overall, DocRaptor almost certainly provides benefits for many people. However, their plans aren’t super cheap either, and they’re targeted at recurring use, not one-shot uses like mine.
I generated over 2500 PDFs in my final output (one per book, plus one per chapter), which would probably have cost me $149 in a month, assuming I didn’t want to tweak them later. Still far cheaper than the cheapest Prince license, but pricey for a personal side project like mine. DocRaptor does have a 7-day free trial, which probably would have allowed me to generate whatever I wanted during that time, but that’s not exactly ideal, either. (Nor do I mind paying something for the service, but over a hundred dollars a shot is high for my purposes.) I emailed the DocRaptor folks about a pay-as-you-go plan (so I wasn’t paying monthly fees when I wasn’t using the service), because I had found references to such a plan elsewhere. I got a very nice response from Matt Gordon, the “lead vocalist” for the group running DocRaptor. Unfortunately, they no longer offer that plan, because they found that disproportionately more of their support costs (and they do provide good support) were going to users who didn’t spend much on the service anyway. We had a nice conversation about the possibility of plans that might support alternative uses such as mine, but it doesn’t sound like there’s anything planned in the immediate future. (I can’t blame them, as they need to make money and do what makes sense for their business to continue existing.) They did make a very nice offer (I won’t disclose the details) that I turned down for unrelated reasons, but they’re definitely nice folks too. My conclusion is that you pretty much can’t go wrong with Prince or DocRaptor. Both have very nice and responsive folks behind them, and seem to be quite well done.

## Tables of Contents and Bookmarks

One of the things relatively unique to printed books is cross-references with page numbers. Most of the book content doesn’t include these. This is primarily because any existing cross references are links to a specific section, and I didn’t think it necessary to include a page number along with the section number. However, the table of contents for the book definitely benefits from page numbers. Pulling a table of contents together in Prince is relatively easy. It could possibly be done automatically with JavaScript, but I chose to create tables of contents in a Ruby preprocessor as I was assembling whole-book files anyway. Prince makes it easy to include page numbers for links to given anchors, so I only needed to pull out the anchor for each section. (Luckily for me, I already had the anchors in a database.) Secondly, I wanted to make sure that chapters and sections were listed in the PDF list of bookmarks. This list is sometimes useful when navigating a book in a PDF viewer, although some viewers don’t show it. Prince again makes this quite easy, simply requiring a CSS annotation for the items you wish to be bookmark headings. (In fact, by default it uses h1-h6 tags, but I disabled that default because it picked up way too many bookmarks.)

## Optimization

In creating the full-book files, I noticed that some books created particularly large files. In general, this appeared to be because they embedded the full source images, rather than resampling them. While an option to resample the images inside Prince would be great, it doesn’t exist at this time. Some of the source images were quite large, and clearly intended to be printed at >= 300 dpi, while most users of the PDFs wouldn’t benefit from such images. My first attempt at reducing file size was to use Ghostscript to resample the images.
Ghostscript has some features that work similarly to the now-unavailable Acrobat Distiller, and seemed likely to do the job. Unfortunately, after getting Ghostscript working (Ubuntu 14.04’s version appears to crash on larger documents, but 14.10’s works), I found that it removed page numbering information and bookmarks. The next step was to try to export this metadata using PDFtk before using Ghostscript, and then import it again afterward. Unfortunately, while PDFtk will output page numbering details, it won’t import them into a PDF, and there doesn’t appear to be any easily-available way to do so. So, I temporarily abandoned the option to resample the images using Ghostscript. (It also may or may not have been worth it in the first place: some Ghostscript-generated files were larger than the Prince originals, so I had to handle both cases.) It may be worth patching Ghostscript in the future to keep the metadata around, but that seems likely to be quite involved. In many cases, you may get some benefit out of using Ghostscript with appropriate options (“gs -q -sDEVICE=pdfwrite -dPDFSETTINGS=/printer -dBATCH -dNOPAUSE -sOutputFile=[outfile] [inputfile]” seems to work well, aside from dropping the additional metadata), but it was unfortunately unsuitable for my purposes at this time.

## Chapter Files

Following up on the “whole book PDFs can be pretty big” issue, and after trying to open some such books and experiencing slow loading times, I decided that it may be appropriate to create one PDF per chapter as well as the whole-book PDF. My first pass at creating these PDFs was to use PDFtk to pull out just the pages from a given chapter. This posed a few problems: first, I had to figure out which pages belonged to which chapter. Luckily, the bookmarks inserted by Prince, combined with PDFtk’s metadata output, gave me the starting page for each chapter (although for a few minor reasons, this link was a bit iffy: the generated bookmark title did not always match the section name I had in my database), and I could assume that a chapter ended just before the next one began. Unfortunately, this ran into the same problem I had before: I would lose the page numbering and bookmarks. (Not to mention the fact that I would need to separately render a new first page to describe the licensing and get the Prince logo back on the first page.) Finally, I decided to simply depend on Prince once again. I got Prince to log the page number and ID for each chapter heading by using a ::before pseudo-class with a content property of “prince-script(log, counter(page), attr(id))”, and a small “log” function in the JavaScript on each page. This allowed me to use the IDs to match up with my database, and easily identify where each chapter started. Because I already had whole-chapter HTML files, I could then use those HTML files to render the chapter in Prince, and everything would still be in sync, without having to try to render and merge together separate front pages for each chapter. (I still needed to get the page numbers to Prince for rendering purposes, but for this, I simply placed the page number in a CSS block in the HTML file.) This solution appears to have worked surprisingly well, with the page numbers matching up where expected. Because the files were rendered separately, there is the possibility of some unforeseen issue (I certainly didn’t inspect the thousands of files by hand), but it seems unlikely.
## Math Finally, when reviewing one of the math textbooks in the collection, I noticed that Prince’s MathML rendering wasn’t particularly great. It is definitely better than nothing, but the rendering quality did leave something to be desired. Unfortunately, the most common web-based solution here, MathJax, doesn’t work very well with Prince. (This is a noted todo item on Prince’s release notes, but it’s not available yet.) After stumbling through a number of other options to try, I ended up using PhantomJS together with MathJax to prerender the math to MathJax’s “HTML-CSS” output (the SVG output didn’t look very good and produced a very large PDF file after the required fixes to make Prince display the SVG output). I forced MathJax to use the STIX fonts (which I installed on my computer), and after the math was rendered, I output the document’s HTML form again (after removing the MathJax wrapper divs). This produced files with reasonably good-looking math, the way they were intended to look. The prerendering code hasn’t been published yet because I haven’t taken the time to clean it up, but if someone is interested, I can definitely post it. Prerendering with MathJax is a step that seems to have very poor asymptotic time complexity. I haven’t formally benchmarked it, but a chapter’s sections took about two minutes to prerender in total, while the whole chapter itself took roughly twelve minutes to prerender. The whole book took roughly four days to prerender. It’s not clear why this occurred, but the prerendering did eventually succeed. It’s also not clear if this is a bug in MathJax, or simply some inefficiency in PhantomJS, so I have yet to report it as a bug to either project (and may never report it – it’s unlikely to come up in common use). ## Fin So, to summarize, getting PDFs of a quality I’m comfortable with took quite a bit of effort. In the end, Prince does most of the work, and I rarely had problems with Prince itself. I think it was worth the effort, at least for a personal learning experience. Hopefully the books will be useful to other people as well. Once again, they’re all available at http://2012books.lardbucket.org. Please feel free to copy or redistribute them as you see fit, pursuant to the terms of the associated Creative Commons by-nc-sa license. Andy Schmitz P.S. If you’re interested in any of the print-specific (or Prince-specific) things I did to make the books look decent when printed, it’s all left in the book’s CSS file toward the bottom, under the “prince” @media type. Feel free to reuse any of that styling for any purpose you see fit, in any situation. I do not believe it is covered under the Creative Commons license: you may consider it to be public domain. ## 17 Comments » 1. Hi there Andy. Can you advise on who to contact for permission to use material from one of the books in the 2012books archive for commercial use? Thanks! Comment by Ilka — 12/11/2014 @ 7:37 am 2. Ilka: Unfortunately, I don’t have the ability to license the books for commercial use, as I am not the original author (or publisher). The publisher has asked to remain anonymous, but I will contact you privately to see if it is okay to pass along your information to them and have them contact you. Comment by Andy — 12/11/2014 @ 9:45 pm 3. Dear Andy, do you know how to access business cases videos “How Would You Handle This?” from Beginning Management of Human Resources book? The videos ask for password (for ex. p.30). Thank you. Comment by Tetiana — 1/8/2015 @ 8:18 am 4. 
Tetiana: Unfortunately, it looks like those Wistia videos have been removed. For the moment, they’re inaccessible. I do have copies of those videos that I saved when I downloaded the books, so I’ll have to see about getting those put up. Unfortunately, getting videos to display online isn’t particularly straightforward, so it will likely take quite a while for me to get them all together. Comment by Andy — 2/15/2015 @ 12:47 pm 5. Hi Andy, Thanks for your work preserving the Creative Commons book archive. I had bought individual chapters from Flatworld for my students under their old business model, but they no longer offer that fee structure and I didn’t want to buy their entire books when I only used a couple chapters, so I stopped assigning the chapters. It is nice to be able to find the books for use as a reference again. I’m thinking about trying to move more of my reading material over to a Nook or Kobo e-reader. My ancient Kindle just broke, so I need to do something different anyhow. I’m wondering if you have any plans for posting these in the EPUB format or if you have any advice about reading them on an EPUB reader? Thanks, -Jonathan Comment by Jonathan Andreas — 4/2/2015 @ 11:24 am 6. Jonathan: Thanks! I don’t have any near-term plans for posting the books in an EPUB format, mostly because of my currently oversubscribed free time. It’s a reasonable idea, and shouldn’t really be all that difficult: EPUB is basically an HTML document anyway. Unfortunately, if I do end up making EPUBs out of the books, I would probably want to spend a bunch of time making sure I got everything right. (Obviously some things, like embedded YouTube videos, wouldn’t transfer well, but most things should be fine.) I’ll put it on my list, at any rate. Sorry about that! Comment by Andy — 4/2/2015 @ 4:50 pm 7. What’s the difference between your two files for “Principles of general chemistry”? What does “1.0M” have that “1.0” doesn’t? Comment by Jay — 4/13/2015 @ 1:10 pm 8. Jay: That’s a good question. At a quick glance, I wasn’t able to see any differences in the table of contents, although it is possible that some of the content is different. I presented them as separate books because they were separate books in the original source of the Creative Commons books, but I wasn’t able to determine why they were separate. Sorry I couldn’t be of more help! Comment by Andy — 4/20/2015 @ 6:57 pm 9. It’s my honor to write and commend you on your effort, even though I could get access to your publication. Comment by IBRAHIM — 4/22/2015 @ 10:06 am 10. Dear owner of 2012books.lardbucket.org/books Website Many thanks for posting books on your website! A student in my class has brought it to my attention that the atomic mass of Ag (silver) in the Periodic Table in section 2.7 in the book “Introduction to Chemistry: General, Organic, and Biological” (v. 1.0) is incorrect. The atomic mass should be 107.868, but it is listed as 196.56655. If this can be corrected, it would be great because it would help students who search for basic information from the web. Thank you very much for paying attention to this issue. Best regards, Youxue Zhang Professor, Univ. Michigan Comment by Youxue Zhang — 9/20/2015 @ 7:38 pm 11. A colleague of mine and I wanted to use a diagram that I found in one of the books from Kurt Lewin in a book we are writing. How do I get permission to use that diagram? Thanks! Comment by Anil Saxena — 9/24/2015 @ 11:12 am 12. IBRAHIM: Thank you! Are you indicating that you can’t view them?
If so, can you give a bit more information on what’s not working? Thanks, Youxue Zhang: Thanks for the correction! I’ll try to update it soon. Unfortunately, I’ll have to figure out a way to do so, as I’ve thus far tried to keep everything very similar to the books as published, so it may take a while. Anil Saxena: Thanks for asking. Unfortunately, I don’t have any rights to sublicense the books. If you can work with the Creative Commons license listed on each page, then you don’t need any extra permission. If you need something beyond what’s allowed by that license, you will need to reach out to the current rightsholders. They have asked me to avoid naming them directly, so I will contact them and see if I can forward along your request. Comment by Andy — 10/26/2015 @ 5:01 pm 13. Hello, Thank you for the work you are doing to make textbooks accessible to all students. I am a community college instructor working to create a free textbook for students. My goal is to combine text from two of the Lardbucket textbooks to create a single composition and analysis book my students can use. My question is about attribution. Do you have any guidelines regarding citing our use of full and partial chapters? For example, should we include source information at the beginning of each chapter, before or after each section borrowed, in the preface, footnotes, in-text citations? I look forward to hearing from you. Comment by Tammy — 3/5/2016 @ 10:53 am 14. Hello, I’m writing to request information about attributing the cc-sa-by textbooks. I’m an instructor at a community college and we are interested in utilizing chapters of several of the textbooks in our attempt to create a first-year composition textbook for our students. I am hoping to learn how you expect users to attribute sections of the text (chapter introduction, footnotes, in-text citations, preface). Thank you. Comment by Tammy — 3/7/2016 @ 1:18 pm 15. Hi Andy, The .PDF link for Advertising Campaigns: Start To Finish seems to be broken. I click on the .PDF link but I just get a black screen. Any advice would be appreciated, thank you. Comment by Steve G — 3/9/2016 @ 12:02 pm 16. I have read through the licensing but still have a question. There are chapters of the book that I do not want to use, and I would like to add some of my own material. Can I delete or add material to the book? If so, is there anything that I need to do to modify the book? Thank you for your work on these books! Comment by Lynne — 5/31/2016 @ 8:15 am 17. Sorry for not getting back to everyone sooner! Tammy: Unfortunately, I can’t really offer great citation recommendations, other than “cite them as you would anything else”. The Creative Commons organization has a set of best practices for attribution that may cover what you need. Because the original publisher has asked for the authors’ names and the publisher’s name to be removed from the attribution, you may be best off citing “an unnamed author”. You are welcome to link to my copies of the books as well, as the original publisher has asked that the copies not be linked to them. (I am also not a lawyer, and if you’re concerned about the legal aspects of citing a Creative Commons work, you may wish to talk to a lawyer.) Don’t forget that you can’t use the CC by-nc-sa license commercially. If you’d like me to get you in touch with the original publisher to talk to them about a different license, let me know. Steve G: The full PDF file for Advertising Campaigns: Start to Finish seems to load for me.
It’s somewhat large, so it takes a bit to load, but it does successfully load. Can you try again? If it doesn’t work in your browser, try right-clicking the link, saving it, and opening the resulting file instead. Lynne: Yep! The Creative Commons license allows you to modify the book and reuse it, as long as you don’t do so for commercial purposes. To do so, just make sure you attribute the chapters that came from other authors to them. (In this case, that may be to “an unnamed author”: see my response to Tammy just above.) You’ll need to make sure you follow the license that’s linked to from the top of each chapter, but that should be relatively simple. Unfortunately, if you want to sell access to the books, you would need to get a license from the original publisher, as I don’t have the ability to give that permission. If you want to talk with them, let me know, and I can put you in touch. Comment by Andy — 6/17/2016 @ 4:56 pm
https://quantumcomputing.stackexchange.com/tags/qudit/hot
# Tag Info 10 For qubits, we usually base all of our operators on the Pauli matrices. Our basic gate set consists of the Pauli matrices themselves, Clifford gates like $H$ and $S$ that map between Pauli matrices, controlled operations like the CNOT that implement a Pauli on one qubit depending on the Pauli eigenstate of another, etc. For any larger $d$-dimensional ... 7 Determining whether a given state is entangled or not is NP hard. So if you include all possible types of entanglement, including mixed states and multipartite entanglement, there is never going to be an elegant solution. Techniques are therefore defined for specific cases, where the structure of the problem can be used to create an efficient solution. For ... 7 The Hilbert space dimension of $n$ qudits is $d^n$, where $d$ is the dimension of the qudit ($d=2$ for qubit, $d=3$ for qutrit, etc). So three qubits have an $8$ dimensional space, two qutrits have a $9$ dimensional space, and one $d=6$ qudit has a six dimensional space. As such, we cannot regard them as equivalent. I guess you meant to compare situations ... 7 Yes the Hilbert space is the same, but you have to choose the isomorphism $\phi : \; \; (\mathbb{C}^2)^{\otimes 2} \simeq \mathbb{C}^4$. But the different setup will mean some unitaries that will be easy to implement in one setup will be hard in the other. For example, as a 2-qubit gate, something like $\sigma_z \otimes 1$ will be easy. But if you write that ... 6 I am not really sure about what you mean by "unmeasuring" a qubit, but if you mean to recover the qubit that was measured by manipulating the post-measurement state then I am afraid that the answer is no. When a quantum state is measured, its superposition state is collapsed to one of the possible outcomes of the measurement, and so the qubit is ... 6 To simplify things a bit, let's take a single qubit and a single qutrit for comparison. First, the amplitude damping channel (giving e.g. emission of a photon) for a qubit is $\mathcal E\left(\rho\right) = E_0\rho E_0^\dagger + E_1\rho E_1^\dagger$, where $E_0 = \begin{pmatrix}1 & 0 \\ 0 & \sqrt{1-\gamma}\end{pmatrix}$, $E_1 = \begin{... 6 The statement in Wikipedia is very generic, and only cites this paper as a reference. Quoting from the abstract of the paper: We demonstrate that decoherence of many-spin systems can drastically differ from decoherence of single spin systems. The difference originates at the most basic level, being determined by parity of the central system, i.e., ... 6 I don't think you'll find a good visual representation. The Bloch sphere for a qubit is a particularly unique coincidence because the number of parameters to represent an arbitrary mixed state is only 1 more than the number of parameters required to represent an arbitrary pure state, and so the pure states can be thought of as the surface to a mixed state's ... 5 As suggested in your Wiki link, the way to detect an entangled state is to find a hyperplane that separates it from the convex set of separable states. This hyperplane represents what is called an entanglement witness. The PPT criterion that you mentioned is one such witness. Now to construct entanglement witnesses for higher dimensional systems is not ... 5 There is no standard name for a qudit for $d>3$. The community has mostly settled on the term qudit (but you will still find qunit or quNit, for example, using $n$ or $N$ instead of $d$ in some older papers).
You will find the odd paper where an individual author will pick a name for the $d=4$ case. I’ve certainly seen ququad and ququart. But I think ... 5 The definition you give for a graph state, and in particular the quantum Fourier transform $F$ and the controlled-$Z$ operator — where we take $Z$ to be the unitary generalisation of the Pauli $Z$ operator, satisfying $Z = F XF^\dagger$ for $X$ a shift-by-one permutation operation — are all well-defined even in composite dimension. The Fourier ... 5 A logical qubit is made out of many physical qubits (or qudits), simply selecting a particular two-dimensional subspace. So you can’t make it “exclusively” out of logical qubits because they sit on top of real physical qubits. In fact, if you're thinking about a terminology of "virtual qubits", that is actually best thought of as a synonym for "logical ... 4 Quantum walks are a simple case of quantum dynamics that involves a qubit (named coin in this context) interacting with a high-dimensional qudit (named walker in this context). Almost anything in quantum optics can be thought of as "combining different qunits" as well: a photon in a superposition of many spatial modes (high-dimensional qudit), together with ... 4 This question does not need to be phrased as a quantum question. One can equally ask what classical register can be used to store a string that uniquely identifies each different configuration of the Rubik’s Cube. This is already implicitly answered in the question: you need 27 bits, 14 trits.... However, this is labouring under the assumption that you can ... 4 A fundamental difference between the two kinds of systems is that a two-qubit system can actually be in an entangled state. On the other hand, a single $d=4$ dimensional system does not possess entanglement, since entanglement is always defined with respect to more than one party. Consequently, for the purposes of quantum protocols that exploit entanglement as ... 4 You may be confusing two uses of the word "base". One definition of "base" has to do with how many digits are used to represent a number. For example, base two uses the digits 0 and 1, and the number five is written as 101 in base two. But in quantum mechanics there is another use of the word "base" which has to do with basis vectors for a vector space. This ... 4 Without additional assumptions or context, there is no fundamental difference between a "$2^n$-dimensional qudit" and "$n$ qubits". Any "qudit system" over $2^n$ modes for some integer $n$ can be thought of as a system of $n$ qubits. Equivalently, an $n$-qubit system is nothing but a $2^n$-dimensional qudit system. The difference is in the fact that if you ... 3 Is quantum computing limited to a superposition of only two states? In theory, it is not. Keep in mind that a qubit is a quantum analogue of the classical "bit" which has only two states $0$ and $1$. In principle, there is no limit to the dimension of the state space of a quantum system. There could even be an "infinite" dimensional separable Hilbert ... 3 You can compute by measuring - see cluster-based quantum computation - but the whole thing that makes measurement different in quantum mechanics is that it destroys the superposition. It can't be undone. Once you measure, the qudit isn't in a state $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle + ... +\gamma|n\rangle$ but in a state $|\psi\rangle = |0\... 3 For a pure state of 8 qubits, the Hilbert space is $2^8$-dimensional.
Dropping the normalization and phase information means you are left with the space $\mathbb{CP}^{2^8-1}$. Unlike a single qubit, which gives the Bloch sphere $\mathbb{CP}^{2^1-1}$, this is too big to draw directly. Instead one usually draws simpler spaces that capture the essential features.... 3 Yes. Just to give one example, the PPT criterion is necessary and sufficient to decide whether a state is separable for qubit-qubit and qubit-qutrit systems, but not beyond. 2 I'm not familiar with how graph states extend to qudits, so let me just answer for the specific case of qubits. Consider a graph $G$, and we create the corresponding graph state $|G\rangle$ by placing a qubit on every vertex in the $|+\rangle$ state, and applying a controlled-phase gate along every edge. Now, take a bipartition of $G$. On either side of ... 2 Short answer: Logical qubits are just an abstraction above physical qubits. A logical qubit is something (see after for examples) that acts like a qubit. Some examples: A logical qubit can be: A single physical qubit. This is the case for most of (all?) the quantum chips currently available. In this case, the logical qubit has no advantage over the ... 2 It is incorrect to use modulo arithmetic in this context. Instead finite field arithmetic should be applied. Work in $\textrm{GF}(4) = \{0, 1, x, x^2\}$, where $x^2 = x + 1$ and conjugation of $a$ is defined as $\bar{a} = a^2$. Addition, multiplication and conjugation tables are then as follows (they appeared as an image in the original answer; a reconstruction appears in the sketch after this answer list). In this picture we have $0 \equiv 0$, $1 \equiv 1$, $2 \equiv x$, ... 2 A unitary operation is reversible, but measurement is a projection operation, which is not reversible. Think about matrix inverses: a projection matrix has lower rank and does not have an inverse. 2 The preferred basis problem is essentially something from the many worlds interpretation: If we are to interpret a superposition as representing many universes, what basis should we choose? Since this comes from the foundations of QM, this aspect of your question is perhaps better suited to the physics stack exchange. Is there a preferred basis for a ... 2 There is also a difference if you consider experiments or implementations. To make a physical qubit, I need to use a two-level quantum system. Qudits then require a more complicated quantum system, e.g., with four levels for a d=4 qudit. The engineering justification for using the more complicated system would be that you then require fewer of the four-level ... 2 They are not equivalent. It can be seen by the fact that the system of $3$ qubits acts on an $8$-dimensional Hilbert space, the 2 qutrit system acts on a $9$-dimensional Hilbert space, and the 6 level qunit acts on a $6$-dimensional Hilbert space. Consequently, the nature of the states defined by each of the quantum systems is different. This dimension ... 2 For theoretical purposes, I would say that describing two qubits either as exactly that, two qubits ($\mathbb{C}^2\otimes\mathbb{C}^2$), or as a single $d=4$ spin, ($\mathbb{C}^4$) are essentially equivalent, assuming you have universal control over the whole Hilbert space, because it means you can do whatever you want. The distinction is usually most ...
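Since the GF(4) tables referenced in the finite-field answer above were images that did not survive extraction, here is a minimal Python sketch (my own reconstruction, not the original poster's code) that regenerates them from the stated rules $x^2 = x + 1$ and $\bar{a} = a^2$:

```python
# GF(4) as polynomials over GF(2) modulo x^2 + x + 1.
# Bit encoding of coefficients: 0 -> 0, 1 -> 1, 2 -> x, 3 -> x + 1 (= x^2).
MOD = 0b111  # the reduction polynomial x^2 + x + 1

def gf4_add(a, b):
    return a ^ b  # characteristic 2, so addition is bitwise XOR

def gf4_mul(a, b):
    p = 0
    for i in range(2):          # carry-less multiply of degree-1 polynomials
        if (b >> i) & 1:
            p ^= a << i
    if p & 0b100:               # reduce the degree-2 term using x^2 = x + 1
        p ^= MOD
    return p

def gf4_conj(a):
    return gf4_mul(a, a)        # conjugation is a -> a^2

names = {0: "0", 1: "1", 2: "x", 3: "x^2"}
elems = [0, 1, 2, 3]
for label, op in [("add", gf4_add), ("mul", gf4_mul)]:
    print(label)
    for a in elems:
        print([names[op(a, b)] for b in elems])
print("conj:", [names[gf4_conj(a)] for a in elems])
```

Quick sanity checks against the answer's definitions: the tables produced give $x \cdot x = x^2$, $x \cdot x^2 = 1$, and conjugation swaps $x$ and $x^2$ while fixing $0$ and $1$.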
https://par.nsf.gov/biblio/10373838
Self-similar diffuse boundary method for phase boundary driven flow Interactions between an evolving solid and inviscid flow can result in substantial computational complexity, particularly in circumstances involving varied boundary conditions between the solid and fluid phases. Examples of such interactions include melting, sublimation, and deflagration, all of which exhibit bidirectional coupling, mass/heat transfer, and topological change of the solid-fluid interface. The diffuse interface method is a powerful technique that has been used to describe a wide range of solid-phase interface-driven phenomena. The implicit treatment of the interface eliminates the need for cumbersome interface tracking, and advances in adaptive mesh refinement have provided a way to sufficiently resolve diffuse interfaces without excessive computational cost. However, the general scale-invariant coupling of these techniques to flow solvers has been relatively unexplored. In this work, a robust method is presented for treating diffuse solid-fluid interfaces with arbitrary boundary conditions. Source terms defined over the diffuse region mimic boundary conditions at the solid-fluid interface, and it is demonstrated that the diffuse length scale has no adverse effects. To show the efficacy of the method, a one-dimensional implementation is introduced and tested for three types of boundaries: mass flux through the boundary, a moving boundary, and passive interaction of the boundary with an incident acoustic wave. … NSF-PAR ID: 10373838 Journal Name: Physics of Fluids ISSN: 1070-6631 3. We present a quasi-incompressible Navier–Stokes–Cahn–Hilliard (q-NSCH) diffuse interface model for two-phase fluid flows with variable physical properties that maintains thermodynamic consistency. Then, we couple the diffuse domain method with this two-phase fluid model – yielding a new q-NSCH-DD model – to simulate the two-phase flows with moving contact lines in complex geometries. The original complex domain is extended to a larger regular domain, usually a cuboid, and the complex domain boundary is replaced by an interfacial region with finite thickness. A phase-field function is introduced to approximate the characteristic function of the original domain of interest. The original fluid model, q-NSCH, is reformulated on the larger domain with additional source terms that approximate the boundary conditions on the solid surface. We show that the q-NSCH-DD system converges to the q-NSCH system asymptotically as the thickness of the diffuse domain interface introduced by the phase-field function shrinks to zero ($\epsilon \rightarrow 0$) with $\mathcal{O}(\epsilon)$. Our analytic results are confirmed numerically by measuring the errors in both $L^{2}$ and $L^{\infty}$ norms. In addition, we show that the q-NSCH-DD system not only allows the contact line to move on curved boundaries, but also makes the fluid–fluid interface … 5. Transition from laminar to turbulent flow occurring over a smooth surface is a particularly important route to chaos in fluid dynamics. It often occurs via sporadic inception of spatially localized patches (spots) of turbulence that grow and merge downstream to become the fully turbulent boundary layer. A long-standing question has been whether these incipient spots already contain properties of high-Reynolds-number, developed turbulence.
In this study, the question is posed for geometric scaling properties of the interface separating turbulence within the spots from the outer flow. For high-Reynolds-number turbulence, such interfaces are known to display fractal scaling laws with a dimension $D \approx 7/3$, where the 1/3 excess exponent above 2 (smooth surfaces) follows from Kolmogorov scaling of velocity fluctuations. The data used in this study are from a direct numerical simulation, and the spot boundaries (interfaces) are determined by using an unsupervised machine-learning method that can identify such interfaces without setting arbitrary thresholds. Wide separation between small and large scales during transition is provided by the large range of spot volumes, enabling accurate measurements of the volume–area fractal scaling exponent. Measurements show a dimension of $D = 2.36 \pm 0.03$ over almost 5 decades of spot volume, i.e., trends fully consistent with high-Reynolds-number turbulence. Additional observations pertaining …
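To make the quoted volume–area scaling measurement concrete, here is a toy Python sketch of the log–log fit. The data are synthetic, and the exponent convention ($A \propto V^{D/3}$ for a surface of fractal dimension $D$ bounding a region of volume $V$) is an assumption on my part, not a description of the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spots": volumes spanning ~5 decades, with areas following
# A ~ V^(D/3) for D = 2.36 plus lognormal scatter (toy values throughout).
D_true = 2.36
V = np.logspace(0, 5, 200)
A = V ** (D_true / 3) * rng.lognormal(sigma=0.05, size=V.size)

# A least-squares slope in log-log space estimates D/3,
# so multiplying the fitted slope by 3 recovers the dimension.
slope, intercept = np.polyfit(np.log(V), np.log(A), 1)
print(f"estimated fractal dimension: {3 * slope:.3f}")  # ~2.36
```

A smooth (non-fractal) surface would give a slope of 2/3 in the same fit; the excess above that is what the abstract's $1/3$ comment refers to.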
https://labs.tib.eu/arxiv/?author=Sergio%20A.%20Rodr%C3%ADguez-Torres
• ### The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: theoretical systematics and Baryon Acoustic Oscillations in the galaxy correlation function(1610.03506) March 1, 2018 astro-ph.CO We investigate the potential sources of theoretical systematics in the anisotropic Baryon Acoustic Oscillation (BAO) distance scale measurements from the clustering of galaxies in configuration space using the final Data Release (DR12) of the Baryon Oscillation Spectroscopic Survey (BOSS). We perform a detailed study of the impact on BAO measurements from choices in the methodology such as fiducial cosmology, clustering estimators, random catalogues, fitting templates, and covariance matrices. The theoretical systematic uncertainties in BAO parameters are found to be 0.002 in the isotropic dilation $\alpha$ and 0.003 in the quadrupolar dilation $\epsilon$. The leading source of systematic uncertainty is related to the reconstruction techniques. Theoretical uncertainties are sub-dominant compared with the statistical uncertainties for the BOSS survey, accounting for $0.2\,\sigma_{stat}$ for $\alpha$ and $0.25\,\sigma_{stat}$ for $\epsilon$ ($\sigma_{\alpha,stat} \sim 0.010$ and $\sigma_{\epsilon,stat} \sim 0.012$, respectively). We also present BAO-only distance scale constraints from the anisotropic analysis of the correlation function. Our constraints on the angular diameter distance $D_A(z)$ and the Hubble parameter $H(z)$, including both statistical and theoretical systematic uncertainties, are 1.5% and 2.8% at $z_{\rm eff}=0.38$, 1.4% and 2.4% at $z_{\rm eff}=0.51$, and 1.7% and 2.6% at $z_{\rm eff}=0.61$. This paper is part of a set that analyzes the final galaxy clustering dataset from BOSS. The measurements and likelihoods presented here are cross-checked with the other BAO analyses in \citet{Acacia16}. The systematic error budget concerning the methodology on post-reconstruction BAO analysis presented here is used in \citet{Acacia16} to produce the final cosmological constraints from BOSS. • ### The mass-size relation of LRGs from BOSS and DECaLS(1802.01596) Feb. 5, 2018 astro-ph.GA We use the DECaLS DR3 survey photometry matched to the SDSS-III/BOSS DR12 spectroscopic catalog to investigate the morphology and stellar mass-size relation of luminous red galaxies (LRGs) within the CMASS and LOWZ galaxy samples in the redshift range $0.2<z<0.7$. The large majority of both samples is composed of early-type galaxies with De Vaucouleurs profiles, while only less than 20% are late-type exponentials. We calibrate DECaLS effective radii using the higher resolution CFHT/MegaCam observations and optimise the correction for each morphological type. By cross-matching the photometric properties of the early-type population with the Portsmouth stellar mass catalog, we are able to explore the high-mass end of the distribution using a large sample of 313,026 galaxies over 4380 deg$^{2}$. We find a clear correlation between the sizes and the stellar masses of these galaxies, which appears flatter than previous estimates at lower masses. The sizes of these early-type galaxies do not exhibit significant evolution within the BOSS redshift range, but a slightly declining redshift trend is found when these results are combined with $z\sim0.1$ SDSS measurements at the high-mass end. The synergy between BOSS and DECaLS has important applications in other fields, including galaxy clustering and weak lensing.
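The first abstract above quotes systematic errors of $0.2\,\sigma_{stat}$ ($\alpha$) and $0.25\,\sigma_{stat}$ ($\epsilon$). As a rough sketch of why such systematics are "sub-dominant", assuming the common convention of adding independent error terms in quadrature (the paper's actual error budget may be assembled differently):

```python
import math

# Quoted BOSS DR12 values: (sigma_stat, theoretical sigma_sys)
# for the dilation parameters alpha and epsilon.
params = {"alpha": (0.010, 0.002), "epsilon": (0.012, 0.003)}

for name, (stat, syst) in params.items():
    total = math.hypot(stat, syst)  # quadrature sum, assuming independence
    print(f"{name}: stat={stat}, sys={syst}, total={total:.4f} "
          f"(+{100 * (total / stat - 1):.1f}% over stat alone)")
```

With these numbers the total error grows by only about 2–3%, which is why a $0.2\,\sigma_{stat}$-level systematic barely moves the final uncertainty.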
• ### The clustering of the SDSS-IV extended Baryon Oscillation Spectroscopic Survey DR14 quasar sample: measurement of the growth rate of structure from the anisotropic correlation function between redshift 0.8 and 2.2(1801.03062) Jan. 9, 2018 astro-ph.CO We present the clustering measurements of quasars in configuration space based on the Data Release 14 (DR14) of the Sloan Digital Sky Survey IV extended Baryon Oscillation Spectroscopic Survey. This dataset includes 148,659 quasars spread over the redshift range $0.8\leq z \leq 2.2$ and spanning 2112.9 square degrees. We use the Convolution Lagrangian Perturbation Theory (CLPT) approach with a Gaussian Streaming (GS) model for the redshift space distortions of the correlation function and demonstrate its applicability for dark matter halos hosting eBOSS quasar tracers. At the effective redshift $z_{\rm eff} = 1.52$, we measure the linear growth rate of structure $f\sigma_{8}(z_{\rm eff})= 0.426 \pm 0.077$, the expansion rate $H(z_{\rm eff})= 159^{+12}_{-13}\,(r_{s}^{\rm fid}/r_s)\ {\rm km\,s^{-1}\,Mpc^{-1}}$, and the angular diameter distance $D_{A}(z_{\rm eff})=1850^{+90}_{-115}\,(r_s/r_{s}^{\rm fid})\ {\rm Mpc}$, where $r_{s}$ is the sound horizon at the end of the baryon drag epoch and $r_{s}^{\rm fid}$ is its value in the fiducial cosmology. The quoted errors include both systematic and statistical contributions. The results on the evolution of distances are consistent with the predictions of flat $\Lambda$-Cold Dark Matter ($\Lambda$-CDM) cosmology with Planck parameters, and the measurement of $f\sigma_{8}$ extends the validity of General Relativity (GR) to higher redshifts ($z>1$). This paper is released with companion papers using the same sample. The results on the cosmological parameters of the studies are found to be in very good agreement, providing clear evidence of the complementarity and of the robustness of the first full-shape clustering measurements with the eBOSS DR14 quasar sample. • ### The clustering of the SDSS-IV extended Baryon Oscillation Spectroscopic Survey DR14 quasar sample: First measurement of Baryon Acoustic Oscillations between redshift 0.8 and 2.2(1705.06373) Oct. 16, 2017 astro-ph.CO We present measurements of the Baryon Acoustic Oscillation (BAO) scale in redshift-space using the clustering of quasars. We consider a sample of 147,000 quasars from the extended Baryon Oscillation Spectroscopic Survey (eBOSS) distributed over 2044 square degrees with redshifts $0.8 < z < 2.2$ and measure their spherically-averaged clustering in both configuration and Fourier space. Our observational dataset and the 1400 simulated realizations of the dataset allow us to detect a preference for BAO that is greater than 2.8$\sigma$. We determine the spherically averaged BAO distance to $z = 1.52$ to 3.8 per cent precision: $D_V(z=1.52)=3843\pm147\,\left(r_{\rm d}/r_{\rm d, fid}\right)$ Mpc. This is the first time the location of the BAO feature has been measured between redshifts 1 and 2. Our result is fully consistent with the prediction obtained by extrapolating the Planck flat $\Lambda$CDM best-fit cosmology. All of our results are consistent with basic large-scale structure (LSS) theory, confirming quasars to be a reliable tracer of LSS, and provide a starting point for numerous cosmological tests to be performed with eBOSS quasar samples. We combine our result with previous, independent, BAO distance measurements to construct an updated BAO distance-ladder.
Using these BAO data alone and marginalizing over the length of the standard ruler, we find $\Omega_{\Lambda} > 0$ at 6.6$\sigma$ significance when testing a $\Lambda$CDM model with free curvature. • ### Galaxy clustering dependence on the $\left[\mathrm{O\scriptsize{II}}\right]$ emission line luminosity in the local Universe(1611.05457) Aug. 2, 2017 astro-ph.GA We study the galaxy clustering dependence on the $\left[\mathrm{O\scriptsize{II}}\right]$ emission line luminosity in the SDSS DR7 Main galaxy sample at mean redshift $z\sim0.1$. We select volume-limited samples of galaxies with different $\left[\mathrm{O\scriptsize{II}}\right]$ luminosity thresholds and measure their projected, monopole and quadrupole two-point correlation functions. We model these observations using the $1\,h^{-1}\,\mathrm{Gpc}$ MultiDark Planck cosmological simulation and generate light-cones with the SUrvey GenerAtoR algorithm. To interpret our results, we adopt a modified (Sub)Halo Abundance Matching scheme, accounting for the stellar mass incompleteness of the emission line galaxies. The satellite fraction constitutes an extra parameter in this model and allows us to optimize the clustering fit on both small and intermediate scales (i.e. $r_p \lesssim 30\,h^{-1}\,\mathrm{Mpc}$), with no need for any velocity bias correction. We find that, in the local Universe, the $\left[\mathrm{O\scriptsize{II}}\right]$ luminosity correlates with all the clustering statistics explored and with the galaxy bias. This latter quantity correlates more strongly with the SDSS $r$-band magnitude than $\left[\mathrm{O\scriptsize{II}}\right]$ luminosity. In conclusion, we propose a straightforward method to produce reliable clustering models, entirely built on the simulation products, which provides robust predictions of the typical ELG host halo masses and satellite fraction values. The SDSS galaxy data, MultiDark mock catalogues and clustering results are made publicly available. • ### Clustering of quasars in the First Year of the SDSS-IV eBOSS survey: Interpretation and halo occupation distribution(1612.06918) Feb. 7, 2017 astro-ph.CO In current and future surveys, quasars play a key role. The new data will extend our knowledge of the Universe as it will be used to better constrain the cosmological model at redshift $z>1$ via baryon acoustic oscillation and redshift space distortion measurements. Here, we present the first clustering study of quasars observed by the extended Baryon Oscillation Spectroscopic Survey. We measure the clustering of $\sim 70,000$ quasars located in the redshift range $0.9<z<2.2$ that cover 1,168 deg$^2$. We model the clustering and produce high-fidelity quasar mock catalogues based on the BigMultiDark Planck simulation. Thus, we use a modified (Sub)Halo Abundance Matching model to account for the specificities of the halo population hosting quasars. We find that quasars are hosted by halos with masses $\sim10^{12.7}M_\odot$ and their bias evolves from 1.54 ($z=1.06$) to 3.15 ($z=1.98$). Using the current eBOSS data, we cannot distinguish between models with different fractions of satellites. The high-fidelity mock light-cones, including properties of halos hosting quasars, are made publicly available. • ### Lensing is Low: Cosmology, Galaxy Formation, or New Physics?(1611.08606) Nov. 25, 2016 astro-ph.CO, astro-ph.GA We present high signal-to-noise galaxy-galaxy lensing measurements of the BOSS CMASS sample using 250 square degrees of weak lensing data from CFHTLenS and CS82.
We compare this signal with predictions from mock catalogs trained to match observables including the stellar mass function and the projected and two-dimensional clustering of CMASS. We show that the clustering of CMASS, together with standard models of the galaxy-halo connection, robustly predicts a lensing signal that is 20-40% larger than observed. Detailed tests show that our results are robust to a variety of systematic effects. Lowering the value of $S_{\rm 8}=\sigma_{\rm 8} \sqrt{\Omega_{\rm m}/0.3}$ compared to Planck 2015 reconciles the lensing with clustering. However, given the scale of our measurement ($r<10$ $h^{-1}$ Mpc), other effects may also be at play and need to be taken into consideration. We explore the impact of baryon physics, assembly bias, massive neutrinos, and modifications to general relativity on $\Delta\Sigma$ and show that several of these effects may be non-negligible given the precision of our measurement. Disentangling cosmological effects from the details of the galaxy-halo connection, the effects of baryons, and massive neutrinos, is the next challenge facing joint lensing and clustering analyses. This is especially true in the context of large galaxy samples from Baryon Acoustic Oscillation surveys with precise measurements but complex selection functions. • ### The clustering of galaxies in the SDSS-III Baryon Oscillation Spectroscopic Survey: RSD measurement from the power spectrum and bispectrum of the DR12 BOSS galaxies(1606.00439) Oct. 26, 2016 astro-ph.CO We measure and analyse the bispectrum of the final, Data Release 12, galaxy sample provided by the Baryon Oscillation Spectroscopic Survey, splitting by selection algorithm into LOWZ and CMASS galaxies. The LOWZ sample contains 361,762 galaxies with an effective redshift of $z_{\rm LOWZ}=0.32$, and the CMASS sample 777,202 galaxies with an effective redshift of $z_{\rm CMASS}=0.57$. Combining the power spectrum, measured relative to the line-of-sight, with the spherically averaged bispectrum, we are able to constrain the product of the growth of structure parameter, $f$, and the amplitude of dark matter density fluctuations, $\sigma_8$, along with the geometric Alcock-Paczynski parameters, the product of the Hubble constant and the comoving sound horizon at the baryon drag epoch, $H(z)r_s(z_d)$, and the angular distance parameter divided by the sound horizon, $D_A(z)/r_s(z_d)$. After combining pre-reconstruction RSD analyses of the power spectrum monopole, quadrupole and bispectrum monopole with post-reconstruction analysis of the BAO power spectrum monopole and quadrupole, we find $f(z_{\rm LOWZ})\sigma_8(z_{\rm LOWZ})=0.427\pm 0.056$, $D_A(z_{\rm LOWZ})/r_s(z_d)=6.60 \pm 0.13$, $H(z_{\rm LOWZ})r_s(z_d)=(11.55\pm 0.38)\times10^3\ {\rm km\,s}^{-1}$ for the LOWZ sample, and $f(z_{\rm CMASS})\sigma_8(z_{\rm CMASS})=0.426\pm 0.029$, $D_A(z_{\rm CMASS})/r_s(z_d)=9.39 \pm 0.10$, $H(z_{\rm CMASS})r_s(z_d)=(14.02\pm 0.22)\times10^3\ {\rm km\,s}^{-1}$ for the CMASS sample. We find general agreement with previous BOSS DR11 and DR12 measurements. Combining our dataset with {\it Planck15} we perform a null test of General Relativity (GR) through the $\gamma$-parametrisation finding $\gamma=0.733^{+0.068}_{-0.069}$, which is $\sim2.7\sigma$ away from the GR predictions. • ### The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: Observational systematics and baryon acoustic oscillations in the correlation function(1607.03145) Oct.
14, 2016 astro-ph.CO We present baryon acoustic oscillation (BAO) scale measurements determined from the clustering of 1.2 million massive galaxies with redshifts $0.2 < z < 0.75$ distributed over 9300 square degrees, as quantified by their redshift-space correlation function. In order to facilitate these measurements, we define, describe, and motivate the selection function for galaxies in the final data release (DR12) of the SDSS III Baryon Oscillation Spectroscopic Survey (BOSS). This includes the observational footprint, masks for image quality and Galactic extinction, and weights to account for density relationships intrinsic to the imaging and spectroscopic portions of the survey. We simulate the observed systematic trends in mock galaxy samples and demonstrate that they impart no bias on baryon acoustic oscillation (BAO) scale measurements and have a minor impact on the recovered statistical uncertainty. We obtain transverse and radial BAO distance measurements in the $0.2 < z < 0.5$, $0.5 < z < 0.75$, and (overlapping) $0.4 < z < 0.6$ redshift bins. In each redshift bin, we obtain a precision that is 2.7 per cent or better on the radial distance and 1.6 per cent or better on the transverse distance. The combination of the redshift bins represents 1.8 per cent precision on the radial distance and 1.1 per cent precision on the transverse distance. This paper is part of a set that analyses the final galaxy clustering dataset from BOSS. The measurements and likelihoods presented here are combined with others in Alam et al. (2016) to produce the final cosmological constraints from BOSS. • ### Galaxy Three-Point Correlation Functions and Halo/Subhalo Models(1608.03660) Aug. 12, 2016 astro-ph.CO, astro-ph.GA We present the measurements of the luminosity-dependent redshift-space three-point correlation functions (3PCFs) for the Sloan Digital Sky Survey (SDSS) DR7 Main galaxy sample. We compare the 3PCF measurements to the predictions from three different halo and subhalo models. One is the halo occupation distribution (HOD) model and the other two are extensions of the subhalo abundance matching (SHAM) model by allowing the central and satellite galaxies to have different occupation distributions in the host halos and subhalos. Parameters in all the models are chosen to best describe the projected and redshift-space two-point correlation functions (2PCFs) of the same set of galaxies. All three model predictions agree well with the 3PCF measurements for the most luminous galaxy sample, while the HOD model performs better in matching the 3PCFs of fainter samples (with luminosity threshold below $L^*$), which is similar in trend to the case of fitting the 2PCFs. The decomposition of the model 3PCFs into contributions from different types of galaxy triplets shows that on small scales the dependence of the 3PCFs on triangle shape is driven by nonlinear redshift-space distortion (and not by the intrinsic halo shape) while on large scales it reflects the filamentary structure. The decomposition also reveals more detailed differences in the three models, which are related to the radial distribution, the mean occupation function, and the velocity distribution of satellite galaxies inside halos. The results suggest that galaxy 3PCFs can further help constrain the above galaxy-halo relation and test theoretical models.
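The two eBOSS DR14 quasar abstracts earlier in this list quote $D_A$, $H$, and $D_V$ at the same effective redshift $z_{\rm eff} = 1.52$. As a rough cross-check, here is a small Python sketch using the standard definition $D_V = \left[(1+z)^2 D_A^2\, cz/H\right]^{1/3}$; it uses central values only and ignores the $r_s$ scaling factors, which is an approximation on my part:

```python
import math

C_KMS = 299792.458  # speed of light in km/s

def d_v(z: float, d_a_mpc: float, h_kms_mpc: float) -> float:
    """Spherically averaged BAO distance,
    D_V = [ (1+z)^2 * D_A^2 * c*z / H ]^(1/3)."""
    d_m = (1 + z) * d_a_mpc  # comoving angular diameter distance
    return (d_m ** 2 * C_KMS * z / h_kms_mpc) ** (1 / 3)

# Central values quoted in the DR14 quasar growth-rate abstract above.
print(f"D_V(1.52) ~ {d_v(1.52, 1850.0, 159.0):.0f} Mpc")
# The companion BAO-only abstract quotes D_V(1.52) = 3843 +/- 147 Mpc;
# the ~3% difference here is within the quoted uncertainties.
```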
• ### Clustering properties of $g$-selected galaxies at $z\sim0.8$(1507.04356) July 26, 2016 astro-ph.CO, astro-ph.GA Current and future large redshift surveys, such as the Sloan Digital Sky Survey IV extended Baryon Oscillation Spectroscopic Survey (SDSS-IV/eBOSS) or the Dark Energy Spectroscopic Instrument (DESI), will use emission-line galaxies (ELG) to probe cosmological models by mapping the large-scale structure of the Universe in the redshift range $0.6 < z < 1.7$. With current data, we explore the halo-galaxy connection by measuring three clustering properties of $g$-selected ELGs as matter tracers in the redshift range $0.6 < z < 1$: (i) the redshift-space two-point correlation function using spectroscopic redshifts from the BOSS ELG sample and VIPERS; (ii) the angular two-point correlation function on the footprint of the CFHT-LS; (iii) the galaxy-galaxy lensing signal around the ELGs using the CFHTLenS. We interpret these observations by mapping them onto the latest high-resolution MultiDark Planck N-body simulation, using a novel (Sub)Halo-Abundance Matching technique that accounts for the ELG incompleteness. ELGs at $z\sim0.8$ live in halos of $(1\pm 0.5)\times10^{12}\,h^{-1}\,\mathrm{M}_{\odot}$ and $22.5 \pm 2.5$% of them are satellites belonging to a larger halo. The halo occupation distribution of ELGs indicates that we are sampling the galaxies in which stars form in the most efficient way, according to their stellar-to-halo mass ratio. • ### The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: cosmological analysis of the DR12 galaxy sample(1607.03155) July 11, 2016 astro-ph.CO We present cosmological results from the final galaxy clustering data set of the Baryon Oscillation Spectroscopic Survey, part of the Sloan Digital Sky Survey III. Our combined galaxy sample comprises 1.2 million massive galaxies over an effective area of 9329 deg$^2$ and volume of 18.7 Gpc$^3$, divided into three partially overlapping redshift slices centred at effective redshifts 0.38, 0.51, and 0.61. We measure the angular diameter distance $D_M$ and Hubble parameter $H$ from the baryon acoustic oscillation (BAO) method after applying reconstruction to reduce non-linear effects on the BAO feature. Using the anisotropic clustering of the pre-reconstruction density field, we measure the product $D_M H$ from the Alcock-Paczynski (AP) effect and the growth of structure, quantified by $f\sigma_8(z)$, from redshift-space distortions (RSD). We combine measurements presented in seven companion papers into a set of consensus values and likelihoods, obtaining constraints that are tighter and more robust than those from any one method. Combined with Planck 2015 cosmic microwave background measurements, our distance scale measurements simultaneously imply curvature $\Omega_K = 0.0003 \pm 0.0026$ and a dark energy equation of state parameter $w = -1.01 \pm 0.06$, in strong affirmation of the spatially flat cold dark matter model with a cosmological constant ($\Lambda$CDM). Our RSD measurements of $f\sigma_8$, at 6 per cent precision, are similarly consistent with this model. When combined with supernova Ia data, we find $H_0 = 67.3 \pm 1.0$ km/s/Mpc even for our most general dark energy model, in tension with some direct measurements. Adding extra relativistic species as a degree of freedom loosens the constraint only slightly, to $H_0 = 67.8 \pm 1.2$ km/s/Mpc.
Assuming flat $\Lambda$CDM we find $\Omega_m = 0.310 \pm 0.005$ and $H_0 = 67.6 \pm 0.5$ km/s/Mpc, and we find a 95% upper limit of 0.16 eV/$c^2$ on the neutrino mass sum. • ### The clustering of galaxies in the SDSS-III Baryon Oscillation Spectroscopic Survey: Modeling the clustering and halo occupation distribution of BOSS-CMASS galaxies in the Final Data Release(1509.06404) May 3, 2016 astro-ph.CO, astro-ph.GA We present a study of the clustering and halo occupation distribution of BOSS CMASS galaxies in the redshift range $0.43 < z < 0.7$ drawn from the Final SDSS-III Data Release. We compare the BOSS results with the predictions of a Halo Abundance Matching (HAM) clustering model that assigns galaxies to dark matter halos selected from the large BigMultiDark $N$-body simulation of a flat $\Lambda$CDM Planck cosmology. We compare the observational data with the simulated ones on a light-cone constructed from 20 subsequent outputs of the simulation. Observational effects such as incompleteness, geometry, veto masks and fiber collisions are included in the model, which reproduces within 1-$\sigma$ errors the observed monopole of the 2-point correlation function at all relevant scales: from the smallest scales, 0.5 $h^{-1}$ Mpc, up to scales beyond the Baryonic Acoustic Oscillation feature. This model also agrees remarkably well with the BOSS galaxy power spectrum (up to $k\sim1$ $h$ Mpc$^{-1}$), and the three-point correlation function. The quadrupole of the correlation function presents some tensions with observations. We discuss possible causes that can explain this disagreement, including target selection effects. Overall, the standard HAM model describes remarkably well the clustering statistics of the CMASS sample. We compare the stellar to halo mass relation for the CMASS sample measured using weak lensing in the CFHT Stripe 82 Survey with the prediction of our clustering model, and find a good agreement within 1-$\sigma$. The BigMD-BOSS light-cone including properties of BOSS galaxies and halo properties is made publicly available. • ### Modelling galaxy clustering: halo occupation distribution versus subhalo matching(1508.07012) April 26, 2016 astro-ph.CO, astro-ph.GA We model the luminosity-dependent projected and redshift-space two-point correlation functions (2PCFs) of the Sloan Digital Sky Survey (SDSS) DR7 Main galaxy sample, using the halo occupation distribution (HOD) model and the subhalo abundance matching (SHAM) model and its extension. All the models are built on the same high-resolution $N$-body simulations. We find that the HOD model generally provides the best performance in reproducing the clustering measurements in both projected and redshift spaces. The SHAM model with the same halo-galaxy relation for central and satellite galaxies (or distinct haloes and subhaloes), when including scatters, has a best-fitting $\chi^2/\rm{dof}$ around $2$--$3$. We therefore extend the SHAM model to the subhalo clustering and abundance matching (SCAM) by allowing the central and satellite galaxies to have different galaxy--halo relations. We infer the corresponding halo/subhalo parameters by jointly fitting the galaxy 2PCFs and abundances and consider subhaloes selected based on three properties, the mass $M_{\rm acc}$ at the time of accretion, the maximum circular velocity $V_{\rm acc}$ at the time of accretion, and the peak maximum circular velocity $V_{\rm peak}$ over the history of the subhaloes.
The three subhalo models work well for luminous galaxy samples (with luminosity above $L_*$). For low-luminosity samples, the $V_{\rm acc}$ model stands out in reproducing the data, with the $V_{\rm peak}$ model slightly worse, while the $M_{\rm acc}$ model fails to fit the data. We discuss the implications of the modeling results.
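Many of the abstracts above lean on (sub)halo abundance matching. For orientation, here is a deliberately stripped-down rank-ordering sketch in Python; the catalogues are synthetic, and this zero-scatter matching omits the scatter, incompleteness corrections, and light-cone machinery the papers themselves use:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy catalogues: one subhalo property (e.g. V_peak) per subhalo,
# and one galaxy luminosity per galaxy (synthetic power-law draws).
v_peak = rng.pareto(3.0, size=10_000) + 1.0      # arbitrary units
luminosity = rng.pareto(2.0, size=10_000) + 1.0  # arbitrary units

# Plain abundance matching: sort both lists and match rank to rank,
# so the most luminous galaxy lands in the subhalo with the highest V_peak.
halo_order = np.argsort(v_peak)[::-1]
lum_sorted = np.sort(luminosity)[::-1]

assigned_lum = np.empty_like(lum_sorted)
assigned_lum[halo_order] = lum_sorted

# Sanity check: ranks agree exactly in this zero-scatter version.
rank_v = np.argsort(np.argsort(v_peak))
rank_l = np.argsort(np.argsort(assigned_lum))
print(bool(np.all(rank_v == rank_l)))  # -> True
```

The choice of the matched subhalo property is exactly what the last abstract compares: ranking on $M_{\rm acc}$, $V_{\rm acc}$, or $V_{\rm peak}$ yields different clustering predictions for the same galaxy sample.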
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-4-quadratic-functions-and-factoring-investigating-algebra-activity-4-7-using-algebra-tiles-to-complete-the-square-draw-conclusions-page-283/2c
## Algebra 2 (1st Edition) $c=0.25b^2$ Using parts a) and b) and the fact that we write the $c$'s in the second column, $c=d^2=(0.5b)^2=0.25b^2$
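For readers without parts a) and b) at hand, the relation being derived is just the completing-the-square condition: $x^2 + bx + c$ is a perfect square exactly when $c = (b/2)^2 = 0.25b^2$. For example, with $b = 6$:

$$x^2 + 6x + 9 = (x+3)^2, \qquad c = (6/2)^2 = 9 = 0.25 \cdot 6^2.$$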
http://www.onemathematicalcat.org/Math/Precalculus_obj/reflectingPropertyHyperbola.htm
# Reflecting Property of a Hyperbola Hyperbolas were introduced in four prior lessons. Hyperbolas (like all conic sections) have an interesting and useful reflecting property. In a nutshell: outside light aimed at one focus of the hyperbola is reflected towards the other focus. The purpose of the current section is to explore and understand this reflecting property of a hyperbola. ## Reflecting Property of a Hyperbola In a hyperbola: Outside light directed towards one focus is reflected towards the other focus. You can play with this reflecting property in the interactive figure that accompanies the original lesson: Points $F_1\,$ and $\,F_2\,$ are the foci of the hyperbola. They can be dragged to change the shape of the hyperbola. Point $\,C\,$ can also be dragged to change the shape of the hyperbola. (It controls the hyperbola constant.) Light from outside the hyperbola that is directed towards $\,F_2\,$ hits the hyperbola at point $\,\color{green}{P}\,$. Point $\,P\,$ can be dragged around the hyperbola. The dashed grey line is the tangent to the hyperbola at point $\,P\,$. As discussed in this earlier section, the physics Law of Reflection states that reflected light always makes equal angles with the tangent line. Observe that the reflected light always passes through $\,F_1\,$! This reflecting property has practical applications in optics and navigation. ## Why Does the Hyperbola Reflecting Property Work? The bulk of this section is devoted to understanding why the reflecting property works. We'll use trigonometry and also borrow a result from calculus. The idea is simple, but (as you'll see) carrying it out requires some fortitude! A few preliminary results are needed: ### Trigonometric Identity: Tangent of a Difference For all real numbers $\,x\,$ and $\,y\,$ with $\,\cos(x-y)\ne 0\,$, $$\tan(x-y) = \frac{\tan x - \tan y}{1 + \tan x\tan y}$$ Proof: We want a formula that involves $\,\tan x := \frac{\sin x}{\cos x}\,$ and $\,\tan y := \frac{\sin y}{\cos y}\,$. This fact motivates the form of ‘$\,1\,$’ that we multiply by in the re-naming below: \begin{alignat}{2} \tan(x-y) &= \frac{\sin(x-y)}{\cos(x-y)} &&\text{(definition of tangent)}\cr\cr &= \frac{\sin x\cos y - \cos x\sin y}{\cos x\cos y + \sin x\sin y} &\qquad&\text{(difference formulas for sine and cosine)}\cr\cr &= \frac{\sin x\cos y - \cos x\sin y}{\cos x\cos y + \sin x\sin y}\cdot\frac{\frac{1}{\cos x\cos y}}{\frac{1}{\cos x\cos y}}&&\text{(multiply by $\,1\,$)}\cr\cr &= \frac{\frac{\sin x\cos y}{\cos x\cos y} - \frac{\cos x\sin y}{\cos x\cos y}}{\frac{\cos x\cos y}{\cos x\cos y} + \frac{\sin x\sin y}{\cos x\cos y}}&&\text{(distributive law)}\cr\cr &= \frac{\tan x - \tan y}{1 + \tan x\tan y}&&\text{(cancel, definition of tangent)} \end{alignat} QED ### Angle of Inclination of a Line DEFINITION angle of inclination of a line: The angle of inclination of a horizontal line equals zero. For non-horizontal lines: Every non-horizontal line in an $xy$-plane intersects the $x$-axis in a unique point. This intersection point splits both the line and the $x$-axis: • the line has a part above and below the $x$-axis • the $x$-axis has a part to the right and to the left of the intersection point The (nonnegative) measure of the angle between: • the part of the line above the $x$-axis • the part of the $x$-axis to the right of the intersection point is the angle of inclination of the line.
Note: Angle of inclination is always in the interval: • $\,[0,180^\circ)\,$ (for degree measure) • $\,[0,\pi)\,$ (for radian measure) ### Relationship Between Slope and Angle of Inclination of a Line Note: If you are visually relating slope and angle of inclination of a line, be sure that ‘$\,1\,$’ on the $x$-axis is the same as ‘$\,1\,$’ on the $y$-axis. Otherwise, uncomfortable things can happen! For example, suppose ‘$\,1\,$’ on the $x$-axis is one mile to the right of zero, but ‘$\,1\,$’ on the $y$-axis is just one inch up from zero. Then, a line with slope $\,1\,$ appears to have an angle of inclination close to zero (instead of $\,45^\circ\,$)! Or, suppose ‘$\,1\,$’ on the $y$-axis is one mile up from zero, but ‘$\,1\,$’ on the $x$-axis is just one inch to the right of zero. Then, a line with slope $\,1\,$ appears to have an angle of inclination close to $\,90^\circ\,$ (instead of $\,45^\circ\,$)! Fact: If $\,\alpha\,$ is the angle of inclination of any non-vertical line with slope $\,m\,$, then: $$\tan\alpha = m$$ In words: the tangent of the angle of inclination of a line equals the slope of the line. Why? • horizontal line: $\,\alpha = 0\,$ and $\,m = 0\,$. Thus, $\,\tan\alpha = \tan 0 = 0 = m\,$. • line with positive slope: (See top sketch at right.) Here, $\,m > 0\,$. Thus, $\,\tan\alpha = \frac{\text{opp}}{\text{adj}} = \frac{m}{1} = m\,$. • line with negative slope: (See bottom sketch at right.) Here, $\,m < 0\,$, so $\,-m > 0\,$. Using the reference angle for $\,\alpha\,$ (in the green triangle), the size of $\,\tan\alpha\,$ is $\, \frac{\text{opp}}{\text{adj}} = \frac{-m}{1} = -m\,$. Since $\,\alpha\,$ is in the second quadrant, the sign of $\,\tan\alpha\,$ is negative. Thus: $$\tan \alpha = \overbrace{(-)}^{\text{sign}}\overbrace{(-m)}^{\text{size}} = m$$ Note: For a vertical line, the angle of inclination is $\,\alpha = 90^\circ\,$. In this case, both slope and $\,\tan\alpha\,$ are undefined. $$\tan(\text{angle of inclination}) = \text{slope of line}$$ Finally, we need a formula for: ### Finding the (Tangent of the) Angle Between Two Intersecting Lines of Known Slope Suppose two non-vertical lines with known slopes intersect. As illustrated at right: • Let $\,\alpha_1\,$ and $\,\alpha_2\,$ denote the angles of inclination of the two lines. Re-naming if necessary, assume $\,\alpha_2 \ge \alpha_1\,$. • Let $\,m_1\,$ be the slope of the line with angle of inclination $\,\alpha_1\,$. • Let $\,m_2\,$ be the slope of the line with angle of inclination $\,\alpha_2\,$. • Let $\,\theta\,$ be the angle between the two lines, as shown: $\,\theta := \alpha_2 - \alpha_1$ Then: \begin{alignat}{2} \tan\theta &= \tan(\alpha_2-\alpha_1)&\quad&\text{(definition of $\,\theta\,$)}\cr\cr &= \frac{\tan\alpha_2 - \tan\alpha_1}{1 + \tan\alpha_2\tan\alpha_1}&&\text{(tangent of a difference)}\cr\cr &= \frac{m_2 - m_1}{1 + m_2m_1}&&\text{(relationship between slope and angle of inclination)} \end{alignat} $$\tan\theta = \frac{m_2 - m_1}{1 + m_2m_1}$$ The line with the greater angle of inclination has slope $\,m_2\,$. ### Set-up/Notation for Understanding the Reflecting Property of a Hyperbola The reflecting property of a hyperbola will be proved for a hyperbola in standard form, with foci on the $x$-axis.
As derived in Equations of Hyperbolas in Standard Form, such a hyperbola has: • equation of the form $\,\displaystyle\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1\,$ (where $\,a\,$ and $\,b\,$ are positive constants) • the foci are $\,(c,0)\,$ and $\,(-c,0)\,$, where $\,c\,$ is the positive number satisfying $\,c^2 = a^2 + b^2$ Refer to the sketch at right for additional set-up: Let $\,P(x,y)\,$ be a typical point on the (right branch of) the hyperbola. The ‘outside light directed towards one focus’ is shown as the green line. The slope of the green line, from the focus $\,(c,0)\,$ to $\,P(x,y)\,$, is: $$\text{slope of green line} = \frac{y_2-y_1}{x_2-x_1} = \frac{y-0}{x-c} = \frac{y}{x-c}$$ The red line connects the other focus, $\,(-c,0)\,$, to $\,P(x,y)\,$. The slope of the red line is: $$\text{slope of red line} = \frac{y_2-y_1}{x_2-x_1} = \frac{y-0}{x-(-c)} = \frac{y}{x+c}$$ The tangent line to the hyperbola at $\,P(x,y)\,$ is shown dashed. Borrowing a result from Calculus, the slope of this tangent line is $\displaystyle\,\frac{b^2}{a^2}\,\frac{x}{y}\,$. For those of you who know Calculus, the details are given here. For others—preview the incredible power of Calculus! Use implicit differentiation on $\,\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1\,$: $$\frac{2x}{a^2} - \frac{2y}{b^2}\frac{dy}{dx} = 0$$ Solve for $\frac{dy}{dx}\,$ (the slope): $$\frac{dy}{dx} = \frac{-2x}{a^2}\cdot\frac{b^2}{-2y} = \frac{b^2}{a^2}\frac{x}{y}$$ ### What Needs To Be Shown The outside light ‘hits’ the tangent line (dashed) at $\,P(x,y)\,$ and is reflected. By the physics Law of Reflection: $$\text{the red line is the path of reflected light}\qquad\text{if and only if}\qquad \alpha = \beta$$ In other words, both of the following are true: • If reflected light follows the red line (and thus passes through the other focus), then angle $\,\alpha\,$ equals angle $\,\beta\,$. • If $\,\alpha = \beta\,$, then reflected light follows the red line (and thus passes through the other focus). Therefore: To prove the reflecting property of the hyperbola, we will show that $\,\alpha = \beta\,$. Both $\,\alpha\,$ and $\,\beta\,$ are positive angles, not exceeding $\,90^\circ\,$. The tangent function is one-to-one between $\,0^\circ\,$ and $\,90^\circ\,$, so if $\,\tan\alpha = \tan\beta\,$, it follows that $\,\alpha = \beta\,$. Consequently: To prove the reflecting property of the hyperbola, we will show that $\,\tan\alpha = \tan\beta\,$. Summarizing results for convenience: • [hyperbola] The hyperbola under investigation has equation $\displaystyle\,\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1\,$. The positive number $\,c\,$ satisfies $\,c^2 = a^2 + b^2\,$. • [green line] The green line is outside light directed towards the focus $\,(c,0)\,$. This green line has slope $\displaystyle\,\frac{y}{x-c}\,$, where $\,(x,y)\,$ are the coordinates of the point $\,P\,$ where the green line intersects the hyperbola. • [red line] The red line is between point $\,P(x,y)\,$ and the second focus, $\,(-c,0)\,$. This red line has slope $\displaystyle\,\frac{y}{x+c}\,$. • [tangent/dashed line] The dashed line is the tangent to the hyperbola at $\,P(x,y)\,$. It has slope $\,\displaystyle\frac{b^2}{a^2}\frac{x}{y}\,$. Using the formula for the tangent of the angle between two intersecting lines of known slope: • The angle $\,\alpha\,$ is formed by the intersecting green and tangent lines. The green line has the greater angle of inclination. 
Thus: $$\tan\alpha = \frac{\frac{y}{x-c} - \frac{b^2}{a^2}\frac{x}{y}}{1 + \frac{y}{x-c}\cdot\frac{b^2}{a^2}\cdot\frac{x}{y}}$$ • The angle $\,\beta\,$ is formed by the intersecting red and tangent lines. The tangent line has the greater angle of inclination. Thus: $$\tan\beta = \frac{\frac{b^2}{a^2}\frac{x}{y} - \frac{y}{x+c}}{1 + \frac{b^2}{a^2}\cdot\frac{x}{y}\cdot\frac{y}{x+c}}$$ Now—breathe deeply—and verify that the following equations are equivalent. The final equation is true, since $\,(x,y)\,$ lies on the hyperbola $\,\displaystyle\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1\,$. Therefore, the first equation is also true, completing the proof! \begin{alignat}{2} \tan\alpha &= \tan\beta &\qquad&\text{(we're trying to show this is true)}\cr\cr \frac{\frac{y}{x-c} - \frac{b^2}{a^2}\frac{x}{y}}{1 + \frac{\color{red}{y}}{x-c}\cdot\frac{b^2}{a^2}\cdot\frac{x}{\color{red}{y}}} &= \frac{\frac{b^2}{a^2}\frac{x}{y} - \frac{y}{x+c}}{1 + \frac{b^2}{a^2}\cdot\frac{x}{\color{red}{y}}\cdot\frac{\color{red}{y}}{x+c}} &&\text{(substitute formulas from above)}\cr\cr \frac{\frac{y}{x-c} - \frac{b^2}{a^2}\frac{x}{y}}{1 + \frac{x}{x-c}\cdot\frac{b^2}{a^2}} &= \frac{\frac{b^2}{a^2}\frac{x}{y} - \frac{y}{x+c}}{1 + \frac{b^2}{a^2}\cdot\frac{x}{x+c}} &&\text{(cancel)}\cr\cr \frac{a^2y(x-c)}{a^2y(x-c)}\left(\frac{\frac{y}{x-c} - \frac{b^2}{a^2}\frac{x}{y}}{1 + \frac{x}{x-c}\cdot\frac{b^2}{a^2}}\right) &= \left(\frac{\frac{b^2}{a^2}\frac{x}{y} - \frac{y}{x+c}}{1 + \frac{b^2}{a^2}\cdot\frac{x}{x+c}}\right)\frac{a^2y(x+c)}{a^2y(x+c)} &&\text{(clear complex fractions)}\cr\cr \frac{a^2y^2 - b^2x(x-c)}{a^2(x-c)y + b^2xy} &= \frac{b^2x(x+c) - a^2y^2}{a^2(x+c)y + b^2xy} &&\text{(multiply out, re-arrange factors)}\cr\cr \bigl(a^2y^2 - b^2x(x-c)\bigr) \bigl(a^2(x+c)y + b^2xy\bigr) &= \bigl(a^2(x-c)y + b^2xy\bigr) \bigl(b^2x(x+c) - a^2y^2\bigr) &&\text{(cross-multiply)}\cr\cr \bigl(a^2y^2 - b^2x^2 + b^2xc\bigr) \bigl(a^2xy + a^2cy + b^2xy\bigr) &= \bigl(a^2xy - a^2cy + b^2xy\bigr) \bigl(b^2x^2 + b^2xc - a^2y^2\bigr) &&\text{(distributive law)}\cr\cr a^4xy^3 \color{purple}{+ a^4cy^3} + a^2b^2xy^3 - a^2b^2x^3y \color{red}{- a^2b^2cx^2y} - b^4x^3y &\color{red}{\,+\, a^2b^2cx^2y} + a^2b^2c^2xy \color{grey}{+ b^4cx^2y}\cr = a^2b^2x^3y \color{orange}{+ a^2b^2cx^2y} - a^4xy^3 \color{orange}{- a^2b^2cx^2y} &- a^2b^2c^2xy \color{purple}{+ a^4cy^3} + b^4x^3y \color{grey}{+ b^4cx^2y} - a^2b^2xy^3 &&\text{(distributive law)}\cr\cr \color{blue}{a^4xy^3} + \color{blue}{a^2b^2xy^3} \color{green}{- a^2b^2x^3y - b^4x^3y} & + a^2b^2c^2xy\cr = \color{green}{a^2b^2x^3y} \color{blue}{- a^4xy^3} &- a^2b^2c^2xy \color{green}{+ b^4x^3y} \color{blue}{- a^2b^2xy^3} &&\text{(delete red/orange/purple/grey terms)}\cr\cr \color{blue}{xy^3(2a^4 + 2a^2b^2)} + \color{green}{x^3y(-2a^2b^2 - 2b^4)} &+ xy(2a^2b^2c^2) = 0 &&\text{(gather like terms on left side)}\cr\cr y^2(a^4 + a^2b^2) + x^2(-a^2b^2 - b^4) &+ a^2b^2c^2 = 0 &&\text{(divide by \,2xy\,)}\cr\cr y^2a^2(a^2 + b^2) - x^2b^2(a^2 + b^2) &+ a^2b^2c^2 = 0 &&\text{(factor)}\cr\cr y^2a^2c^2 - x^2b^2c^2 &+ a^2b^2c^2 = 0 &&\text{(\,a^2 + b^2 = c^2\,)}\cr\cr - y^2a^2 + x^2b^2 &- a^2b^2 = 0 &&\text{(multiply by \,-1\,, divide by \,c^2\,)}\cr\cr x^2b^2 - y^2a^2 &= a^2b^2 &&\text{(re-arrange)}\cr\cr \frac{x^2}{a^2} - \frac{y^2}{b^2} &= 1 &&\text{(divide by \,a^2b^2\,)}\cr\cr \end{alignat} Hooray! Isn't it wonderful when simplicity emerges from seeming chaos? Master the ideas from this section
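As a concrete check of the slope formulas and the angle formula above (the specific hyperbola and point are chosen here purely for illustration): take $\,a=3\,$ and $\,b=4\,$, so $\,c=5\,$ and the hyperbola is $\,\frac{x^2}{9}-\frac{y^2}{16}=1\,$, and take the point $\,P(3\sqrt 2,\,4)\,$ on its right branch. Then the green, red, and tangent slopes are $$\frac{y}{x-c}=\frac{4}{3\sqrt 2-5},\qquad \frac{y}{x+c}=\frac{4}{3\sqrt 2+5},\qquad \frac{b^2}{a^2}\frac{x}{y}=\frac{16}{9}\cdot\frac{3\sqrt 2}{4}=\frac{4\sqrt 2}{3},$$ and the formula for the tangent of the angle between two lines gives $$\tan\alpha=\frac{20\sqrt 2-12}{25\sqrt 2-15}=\frac{4}{5}, \qquad \tan\beta=\frac{12+20\sqrt 2}{15+25\sqrt 2}=\frac{4}{5},$$ so $\,\tan\alpha=\tan\beta\,$, exactly as the general proof guarantees.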
2019-02-20 08:00:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9902733564376831, "perplexity": 1574.4074512268432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494485.54/warc/CC-MAIN-20190220065052-20190220091052-00287.warc.gz"}
https://math.stackexchange.com/questions/1802401/can-i-get-weak-convergence-in-sobolev-spaces-from-convergence-of-distributions
Can I get weak convergence in Sobolev spaces from convergence of distributions? My question is the following. Given a sequence $(f_k)_k$ in $W^{1,q}(\Omega)$ with $q \in (1,\infty)$ and $\Omega \subseteq \mathbb{R}^n$ open and bounded. If I want to show that $f_k$ converges weakly in $W^{1,q}(\Omega)$, is it then sufficient to show that the corresponding sequence of regular distributions $([f_k])_k$ converges in $D'(\Omega)$, the locally convex space of distributions? I tend to say yes, since the test functions $D(\Omega)$ are dense in $W^{1,q}(\Omega)$ and by that $W^{1,q}(\Omega)'$ is dense in $D'(\Omega)$. But I'm not quite sure. Is anyone familiar with stuff like this?

• If the sequence $\{f_k\}$ is additionally bounded in $W^{1,q}(\Omega)$, the density argument is ok. If it is not bounded, it does not converge weakly. – gerw May 27 '16 at 17:26
• Oh, sorry, I forgot to write that I want to apply this to bounded sequences, as you said. – CandyOwl May 27 '16 at 17:28
• Note that $D(\Omega)$ is not dense in $W^{1,q}(\Omega)$ but merely in $W_0^{1,q}(\Omega)$. – gerw May 27 '16 at 17:43
• Sure, many thanks for reminding me! – CandyOwl May 27 '16 at 17:53

Let us assume that $f_k$ converges to $F \in D'(\Omega)$ in the sense that $$\int_\Omega f_k \, v \, \mathrm{d}x \to F(v) \qquad\forall v \in D(\Omega).$$ Now, since $W^{1,q}(\Omega)$ is reflexive, you get $f \in W^{1,q}(\Omega)$ and a subsequence such that $f_{n_k} \rightharpoonup f$ in $W^{1,q}(\Omega)$. In particular, $f_{n_k} \rightharpoonup f$ in $L^1(\Omega)$, and this shows $$F(v) = \int_\Omega f \, v \, \mathrm{d}x \qquad\forall v \in D(\Omega).$$ It remains to show the convergence of the whole sequence. Suppose that $f_k \not\rightharpoonup f$. Then there are $\varepsilon > 0$, $\Phi \in W^{1,q}(\Omega)'$ and a subsequence with $|\Phi(f_{m_k} - f) | > \varepsilon$ for all $k$. However, since $\{f_{m_k}\}$ is bounded, it has a weakly convergent subsequence, and its weak limit has to be $f$ (same argument as above). This is a contradiction. Hence, the whole sequence converges.
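For quick reference, here is the statement the answer establishes, with the boundedness hypothesis from the comments folded in (the phrasing of this summary is mine, not part of the original thread): let $1<q<\infty$, let $\Omega\subseteq\mathbb{R}^n$ be open and bounded, and let $(f_k)$ be a bounded sequence in $W^{1,q}(\Omega)$. If $$\int_\Omega f_k\,v\,\mathrm{d}x \to F(v)\qquad\forall v\in D(\Omega)$$ for some $F\in D'(\Omega)$, then there is $f\in W^{1,q}(\Omega)$ with $[f]=F$ and $f_k\rightharpoonup f$ in $W^{1,q}(\Omega)$.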
2019-07-19 06:11:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9621067643165588, "perplexity": 90.95141948879674}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526064.11/warc/CC-MAIN-20190719053856-20190719075856-00028.warc.gz"}
https://hsm.stackexchange.com/questions/2520/did-gauss-formulate-or-at-least-know-of-the-full-essence-of-the-gauss-bonnet-t
# Did Gauss formulate, or at least know of, the full essence of the Gauss-Bonnet Theorem?

I know that a special case of the Bonnet theorem, called the Theorema Elegantissimum, was proved by Gauss in his 1827 treatise on differential geometry. This was a theorem that dealt with the connection between total curvature and angular deficit, but only in the case of geodesic lines. I know also that he described the notion of geodesic curvature in an unpublished paper. But is there a place (letters, private notes) where he directly referred to the full version of the Bonnet theorem, e.g. the connection between geometry and topology (in the sense of Euler's characteristic)?

Apparently not. On page 463 of Daniel Gottlieb's 1996 article All the Way with Gauss-Bonnet and the Sociology of Mathematics (The American Mathematical Monthly, Vol. 103, No. 6), he writes (according to Morris Hirsch) that Walter Dyck in 1888 was the first to connect the degree of the Gauss map with the Euler-Poincaré characteristic. A few lines later in the same article, Gottlieb writes that Hans Samelson was unable to find a statement of the global Gauss-Bonnet theorem in Gauss' works.

Although I've already accepted one answer, I added this answer in order to clarify what is known about Gauss's work towards the general Gauss-Bonnet theorem and what is a matter of speculation; this distinction is not clear in Mark Yasuda's answer, and one might get from it an incorrect impression about the roots of Gauss's differential geometric ideas.
In addition, Bolza makes the following remarks: In the last section of the work mentioned previously, Gauss gives a strange transformation for the expression he found for the geodesic curvature, which is the actual core not only of Gauss's theorem about total curvature, but also of Bonnet's later generalization (1848). The expression given by Gauss has the following form: $$K_gds = \frac{\sqrt{EG-F^2}(p'dq'-q'dp')}{Ep'^2 + 2Fp'q' + Gq'^2}+\frac{\Phi(p,q,p',q')dt}{(Ep'^2 + 2Fp'q' + Gq'^2)\sqrt{EG-F^2}}$$ where $$\Phi$$ is a certain cubic form in variables $$p', q'$$ whose coefficients are rational functions of $$E,F,G$$ and their first derivatives with respect to $$p,q$$. Bolza continues and describes Gauss derivation of the following formula (i didn't include the derivation): $$K_gds - d\theta = Pdp + Qdq$$ where $$P,Q$$ are the following functions: $$P = \frac{EF_p - \frac{1}{2}EE_q - \frac{1}{2}FE_p}{E\sqrt{EG-F^2}},Q = \frac{\frac{1}{2}EG_p - \frac{1}{2}FE_q}{E\sqrt{EG-F^2}}$$ Bolza preceeds and says that integrating these equations in geodesic polar coordinates yields Bonnet's generalization of the theorem of total curvature (the famous Gauss-Bonnet theorem): $$\int_{\Omega} Kd\sigma = \int_{d\Omega}d\theta - \int_{d\Omega}K_gds$$ Since the boundary of the surface is a closed regular loop, $$\int_{d\Omega}d\theta = 2\pi$$, so this theorem agrees with the fact that the Euler characteristic of a simply-connected surface with boundary (such a surface is topologically equivalent to a closed disk) is $$\chi = 1$$. The final step of integrating the expressions is not made explicitly in Gauss's manuscript. Bolza cocludes with the following words: It can hardly be assumed that this obvious connection could have escaped from Gauss, and so one can probably assume that Gauss was already familiar with Bonnet's generalization of his theorem, as R.V. Lilienthal and P. Stackel have concluded from the fact that Gauss wrote the expression $$K_gds$$ as $$df$$, and reffered to it as "the differential of the curvature of the side". Finally, there is one more piece of evidence to consider - Gauss in his published treatise on differential geometry ("General investigations of curved surfaces", 1828) anounced the publication of further investigations on the "curvature integral" (such a memoir, however, didn't appear); in light of this unpublished work, it's not difficult to guess what he had in mind when he made the anouncement. As an illustration of several ideas expressed in this answer, i've added this figure. A sphere cutted along a small circle is an example of a curved and simply-connected surface with a non-geodesic boundary (boundary with geodesic curvature). The case of Gauss-Bonnet theorem which was probably covered in Gauss's investigations assures that for all geometries (including those with metric, Gauss curvature and/or geodesic curvature different than a punctured sphere) with this topology (i.e, homeomorphic to a sphere with a hole) the sum of the total curvature of the surface and the total geodesic curvature of it's boundary is always $$2\pi$$.
2020-09-19 02:34:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8817716240882874, "perplexity": 435.5805424390173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400189928.2/warc/CC-MAIN-20200919013135-20200919043135-00741.warc.gz"}
https://techcommunity.microsoft.com/t5/excel/replace-nv-with-quot-0-quot/td-p/3686121
SOLVED

# Replace #NV with "0"

Hi Experts! I have a (hopefully) simple problem with replacing #NV with "0". I have a simple addition =C4+D9+K9, and one of these fields is a "#NV" result, so the response in this field is also "#NV".

1. Question: how can I solve this problem in the above-mentioned field?

Otherwise:

2. Question: how can I solve the problem in the originating field, so the response will not be "#NV", but "0"?

=SVERWEIS(C25;$1:$1048576;4;FALSCH)

4 Replies

best response confirmed by Hans Vogelaar (MVP)

Solution

# Re: Replace #NV with "0"

=WENNNV(SVERWEIS(C25;$1:$1048576;4;FALSCH);0)

You can wrap the formula in WENNNV.

# Re: Replace #NV with "0"

Wow! Thank you very much for the really quick answer. I did it, but unfortunately I got an error message: "too many arguments for this function". Any idea?

# Re: Replace #NV with "0"

The formula returns the intended result. You can copy and paste the formula into your German Excel version and it will work. Perhaps you didn't copy the formula and manually entered too many arguments.

# Re: Replace #NV with "0"

Thank you very much! I don't know why, but it works! Great job! I'm thinking about the fact that my formula looks the same as the copy of your formula. The arguments seem to be the same...
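A note for readers using an English-localized Excel (this translation is an addition to the thread; the cell references are simply the ones from the example above): WENNNV corresponds to IFNA, SVERWEIS to VLOOKUP, FALSCH to FALSE, and the argument separator becomes a comma, so the accepted formula would read:

=IFNA(VLOOKUP(C25,$1:$1048576,4,FALSE),0)

One common cause of a "too many arguments" error is using the wrong list separator for the local settings (semicolon vs. comma), which makes Excel parse extra arguments.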
2023-01-27 18:58:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8904635310173035, "perplexity": 2197.3305747959917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764495001.99/warc/CC-MAIN-20230127164242-20230127194242-00768.warc.gz"}
https://zbmath.org/0554.65010
# zbMATH — the first resource for mathematics Bessel transforms and rational extrapolation. (English) Zbl 0554.65010 A numerical method is developed which handles the Bessel transform of functions having slow rates of decrease, i.e. $$f(u)=O(u^{-\alpha})$$, $$u\to +\infty$$ $$(\alpha >0)$$ in the Bessel transform $$H_ v(\lambda)=\int^{\infty}_{0}f(u)J_ v(\lambda u)du,v>-1/2.$$ The method replaces $$H_ v$$ by a related damped transform for which the sinc quadrature rule provides an efficient and accurate approximation. It is then shown that the value of $$H_ v(\lambda)$$ can be obtained from the damped transform by extrapolation with the Thiele algorithm. ##### MSC: 65D20 Computation of special functions and constants, construction of tables 65R10 Numerical methods for integral transforms 44A15 Special integral transforms (Legendre, Hilbert, etc.) 44A20 Integral transforms of special functions 33C10 Bessel and Airy functions, cylinder functions, $${}_0F_1$$ Full Text: ##### References: [1] Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions. NBS Appl. Math. Ser.55, 375-417. New York: Dover 1964 [2] Crump, K.S.: Numerical inversion of Laplace Transforms using a Fourier series approximation. J. Assoc. Comput. Mach.23, 89-96 (1976) · Zbl 0315.65074 [3] de Balaine, G., Franklin, J.N.: The calculation of Fourier integrals. Math. Comput.20, 570-89 (1966) · Zbl 0196.49102 [4] Erd?lyi, A., Magnus, W., Oberhettinger, F., Tricomi, F.: Tables of Integral Transforms, vol. 1. New York: McGraw-Hill 1954 · Zbl 0055.36401 [5] Erd?lyi, A., Magnus, W., Oberhettinger, F., Tricomi, F.: Tables of Integral Transforms, vol. 2. New York: McGraw-Hill 1954 · Zbl 0055.36401 [6] Henrici, P.: Applied and Computational Complex Analysis, vol. 2. New York: John Wiley 1977 · Zbl 0363.30001 [7] Hildebrand, F.B.: Introduction to Numerical Analysis. (2nd ed.) New York: McGraw-Hill 1974 · Zbl 0279.65001 [8] Longman, I.M.: Note on a method for computing infinite integrals of oscillatory functions. Proc. Camb. Philos.52, 764-68 (1956) · Zbl 0072.33803 [9] Olver, F.W.J.: Asymptotics and Special Functions. New York: Academic Press 1974 · Zbl 0303.41035 [10] Sidi, A.: Extrapolation methods for oscillatory infinite integrals. J. Inst. Math. App.26, 1-20 (1980) · Zbl 0464.65002 [11] Stenger, F.: Numerical methods based on Whittaker cardinal, or sinc functions. SIAM Rev.23, 165-224 (1981) · Zbl 0461.65007 [12] Stenger, F.: Explicit, nearly optimal, linear rational approximation with preassigned poles. (in preparation) · Zbl 0592.41019 [13] Stenger, F.: Optimal convergence of minimum norm approximations inH p . Numer. Math.29, 342-62 (1978) · Zbl 0437.41030 [14] Widder, D.V.: The Laplace Transform. Princeton: University Press 1941 · Zbl 0063.08245 [15] Wuytack, L.: A new technique for rational extrapolation to the limit. Numer. Math.17, 215-221 (1971) · Zbl 0225.65007 [16] Wynn, P.: On a procrustean technique for the numerical transformation of slowly convergent sequences and series. Proc. Camb. Philos.52, 663-71 (1960) · Zbl 0072.33802 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
2021-09-20 00:41:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7597821950912476, "perplexity": 5006.284582287785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056902.22/warc/CC-MAIN-20210919220343-20210920010343-00367.warc.gz"}
https://chadburrus.com/?p=45
by # S520, January 22 Here’s another edition of my notes from my S520 class.  (Sorry, I’m a little behind.) This time, I wrote all the notes in $LaTeX$, both for the challenge and for the practice.  Therefore, today’s entry is a PDF file, not a real blog post.  (Sorry.)  Here are the notes.  Let me know if you find any major errors. Notes for January 22, 2010 By the way, I am still working on the RAMMCAP post I promised last time–it’s just turning out to be a lot longer than I’d originally intended.  I’ll probably break it up into multiple posts next week.
2020-08-03 09:40:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31019675731658936, "perplexity": 1802.667278802998}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735792.85/warc/CC-MAIN-20200803083123-20200803113123-00322.warc.gz"}
http://usaco.org/current/data/sol_moop_silver_open20.html
(Analysis by Benjamin Qi)

Construct an undirected graph where each vertex represents a moo particle and there exists an edge between two moo particles if they can interact. An interaction corresponds to removing a vertex with at least one adjacent edge. Within each connected component, at least one particle must remain. Conversely, we can show that this is always attainable. Consider a spanning forest of the graph; just keep removing a particle that is a leaf in this forest.

It remains to show how to compute the number of connected components in faster than $O(N^2)$ time. Sort the moo particles in increasing order of $x$ and then $y$. Initially, suppose that each particle is its own connected component. Then while there exist two connected components that are adjacent in the order such that the minimum $y$-coordinate in the left component is at most the maximum $y$-coordinate of the right component, combine them together.

For the following input (a combination of the two samples), the answer is 3.

7
1 0
0 1
-1 0
0 -1
3 -5
4 -4
2 -2

After this is done, the $i$-th moo particle in the sorted order is not in the same connected component as the $i+1$-st if and only if $\min(y_1,y_2,\ldots,y_i)>\max(y_{i+1},y_{i+2},\ldots,y_N)$ (which automatically implies that $\max(x_1,x_2,\ldots,x_i)<\min(x_{i+1},x_{i+2},\ldots,x_N)$). So after sorting we only need $O(N)$ additional time to compute the answer.

Dhruv Rohatgi's code:

#include <cstdio>
#include <iostream>
#include <algorithm>
using namespace std;
#define MAXN 100000

int N;
int x[MAXN], y[MAXN];
int cid[MAXN];    // particle indices, to be sorted by coordinate
int minl[MAXN];   // minl[i] = minimum y among the first i+1 particles in sorted order
int maxr[MAXN];   // maxr[i] = maximum y among particles i..N-1 in sorted order

// sort by x, breaking ties by y
bool cmp(int a,int b)
{
	if(x[a]==x[b]) return y[a]<y[b];
	return x[a]<x[b];
}

int main()
{
	freopen("moop.in","r",stdin);
	freopen("moop.out","w",stdout);
	cin >> N;
	for(int i=0;i<N;i++)
	{
		cin >> x[i] >> y[i];
		cid[i] = i;
	}
	sort(cid,cid+N,cmp);
	// prefix minima of y
	minl[0] = y[cid[0]];
	for(int i=1;i<N;i++)
		minl[i] = min(minl[i-1], y[cid[i]]);
	// suffix maxima of y
	maxr[N-1] = y[cid[N-1]];
	for(int i=N-2;i>=0;i--)
		maxr[i] = max(maxr[i+1], y[cid[i]]);
	// each position i with minl[i] > maxr[i+1] separates two components
	int ans = 1;
	for(int i=0;i<N-1;i++)
		if(minl[i] > maxr[i+1])
			ans++;
	cout << ans << '\n';
}
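To see the final counting step on the sample input above (this walkthrough is an illustration added here, not part of the original analysis): sorted by $x$ with ties broken by $y$, the particles are $(-1,0), (0,-1), (0,1), (1,0), (2,-2), (3,-5), (4,-4)$, so the $y$-values in sorted order are $0,-1,1,0,-2,-5,-4$. The prefix minima are $0,-1,-1,-1,-2,-5,-5$ and the suffix maxima are $1,1,1,0,-2,-4,-4$. The condition minl[i] > maxr[i+1] holds exactly at $i=3$ (since $-1>-2$) and $i=4$ (since $-2>-4$), giving $1+2=3$ components, matching the stated answer.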
2020-07-13 14:53:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8315179347991943, "perplexity": 1411.094743962694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657145436.64/warc/CC-MAIN-20200713131310-20200713161310-00168.warc.gz"}
http://cccourseworkyvvu.communiquepresse.info/typing-a-thesis-in-latex.html
# Typing a thesis in latex

• Using LaTeX for writing research papers: at first, I also did not feel very comfortable with LaTeX; I first did my master's degree thesis in LaTeX.
• Using LaTeX to write a PhD thesis: free ebook download as PDF file (.pdf), text file (.txt), or view presentation slides online.
• Writing a New Mexico Tech thesis with LaTeX: if you have had help in typing the thesis, you may include a command of this form to give them credit: \typist{name…
• Doing Purdue University theses using LaTeX (Purdue University thesis LaTeX typesetting system): whenever I typed \xn it would be the same as typing…
• Curricula vitae/résumés: a curriculum vitae, otherwise known as a CV or résumé, is a document used by individuals to communicate their work history, education and…
• Preparing a thesis with LaTeX (Rensselaer Polytechnic Institute): …after typing your thesis into an ASCII editor using the LaTeX macros…
• How do I insert code into a LaTeX document? Writing code in a LaTeX document…
• Guidelines for master's theses and doctoral dissertations: all typing must fall within the remaining 6” x 9” typing area (except page numbers…
• What is the best LaTeX template for writing a book or thesis? How do I add a page? Cambridge PhD thesis LaTeX. What are the best programs for typing LaTeX or…
• Download PhD thesis for free; PhD thesis version management by Van Minh Nguyen; a typing tutor that teaches you to touch type; thesis LaTeX.
• Missing \endcsname inserted; olc-latex-thesis; delete LaTeX; just start typing…
• Our academic typing service gives you a way to get your dissertation and thesis typed: free proofreading, 99% accuracy, best price.
2018-10-17 11:46:39
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9239491820335388, "perplexity": 4042.163422173234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511173.7/warc/CC-MAIN-20181017111301-20181017132801-00057.warc.gz"}
https://cran.irsn.fr/web/packages/scorepeak/vignettes/intro.html
# Introduction to scorepeak

#### 2019-08-20

There are true peaks and false peaks (noise) in time series data. We need to formalize the notion of a peak to distinguish true peaks from false peaks. One approach is to define a peak function, which gives a score for a data point. The score indicates how likely it is that the data point is a true peak. We can classify a peak as true or false by the score. The scorepeak package provides several peak functions and their building blocks.

## Mathematical Notation

$$T=x_1,x_2,\cdots,x_n$$: a univariate, uniformly sampled time series
$$x_i$$: $$i^{th}$$ point in T
$$n$$: length of time series data
$$l$$: window size ($$l\geq3$$ and $$l$$ is odd)
$$k$$: half of window size ($$k=\frac{l-1}{2}$$)
$$N^r(k,i,T)=<x_{i+1},x_{i+2},\cdots,x_{i+k}>$$: the sequence of right temporal neighbors
$$N^l(k,i,T)=<x_{i-k},x_{i-k+1},\cdots,x_{i-1}>$$: the sequence of left temporal neighbors
$$N(k,i,T)=N^r(k,i,T){\cdot}N^l(k,i,T)$$: the sequence of left and right temporal neighbors ($$\cdot$$ denotes concatenation)
$$N'(k,i,T)=N^r(k,i,T){\cdot}\{x_i\}{\cdot}N^l(k,i,T)$$: the sequence of the data point and its left and right temporal neighbors
$$max(A)$$: maximum in a set $$A$$
$$min(A)$$: minimum in a set $$A$$
$$mean(A)$$: average of elements in a set $$A$$
$$s(A)$$: standard deviation of elements in a set $$A$$

## Peak Functions

### Type 1

The Type 1 peak function $$S_1$$ computes the average of (i) the maximum among the signed distances of $$x_i$$ from its right neighbors and (ii) the maximum among the signed distances of $$x_i$$ from its left neighbors.

$$\large S_1(k,i,T)=\dfrac{max_{j{\in}N^l(k,i,T)}(x_i-x_j)+max_{j{\in}N^r(k,i,T)}(x_i-x_j)}{2}$$

### Type 2

The Type 2 peak function $$S_2$$ computes the signed distance of $$x_i$$ from the average of its left and right temporal neighbors.

$$\large S_2(k,i,T)=x_i-mean(N(k,i,T))$$

For instance, with $$k=1$$ and $$T=1,3,2$$, we get $$S_2(1,2,T)=3-\frac{1+2}{2}=1.5$$.

### Type 3

The Type 3 peak function $$S_3$$ computes the product of (i) the signed distance of $$x_i$$ from the larger one of the average of $$N^l(k,i,T)$$ and the average of $$N^r(k,i,T)$$ and (ii) the standard deviation of $$N'(k,i,T)$$.

$$\large S_3(k,i,T)=(x_i-max(mean(N^l(k,i,T)), mean(N^r(k,i,T))))*s(N'(k,i,T))$$

## Example of Use

See the data shown below. We can see many local peaks in the data. The points indicate the local peaks.

library(scorepeak)
data("ecgca102")
local_peaks <- detect_localmaxima(ecgca102)
idx_true_peaks <- c(239, 255, 387, 439, 625)
plot(ecgca102, type = "l", main = "Local Peaks")
points(which(local_peaks), ecgca102[local_peaks], col = "red", pch = 19)
points(idx_true_peaks, ecgca102[idx_true_peaks], col = "blue", pch = 19)

Which local peaks are true peaks? Here, assume that the blue points are true peaks and the red points indicate false peaks. Then, let's detect the true peaks. First, screen the local peaks. Second, apply a peak function (the score_type1 function).

local_peaks_screened <- detect_localmaxima(ecgca102, 13)
score <- score_type1(ecgca102, 51)
plot(ecgca102, type = "l", main = "Screened Local Peaks", ylim = c(-0.38, 0.53))
points(which(local_peaks), ecgca102[local_peaks], col = "red", pch = 19)
points(seq(length(score)), score, type = "l", col = "green")

Finally, binarize the screened local peaks by a threshold.
true_peaks <- score > 0.03 & local_peaks_screened
plot(ecgca102, type = "l", main = "Detected True Peaks")
points(which(true_peaks), ecgca102[true_peaks], col = "blue", pch = 19)

Although we binarized the local peaks by a user-defined threshold value above, we can determine the threshold value automatically by machine learning if we prepare a training set. By the way, we can also combine a peak function with a clustering algorithm.

classified_peaks <- cluster::pam(score, 2, cluster.only = TRUE)
cp1 <- classified_peaks == 1 & local_peaks_screened
cp2 <- classified_peaks == 2 & local_peaks_screened
plot(ecgca102, type = "l", main = "Classified Peaks")
points(which(cp1), ecgca102[cp1], col = "red", pch = 19)
points(which(cp2), ecgca102[cp2], col = "blue", pch = 19)

## Boundary Condition

The above definitions of the peak functions are valid if $$i$$ is greater than $$k$$ and smaller than $$n-k$$. However, the definitions are not valid if $$i$$ is smaller than or equal to $$k$$ or greater than or equal to $$n-k$$. We need to consider how to treat the boundary ($$i\leq k$$ or $$i\geq n-k$$). The peak functions in the scorepeak package have a boundary argument, which determines how to treat the boundary. The valid values of boundary are shown below.

• "reflecting", "r": Reflecting Boundary Condition
• "periodic", "p": Periodic Boundary Condition

### Reflecting Boundary Condition

Extend the data reflectively as follows.

$$T=x_n,\cdots,x_2,x_1,x_2,\cdots,x_n,x_{n-1},\cdots,x_1$$

### Periodic Boundary Condition

Extend the data periodically as follows.

$$T=x_1,\cdots,x_n,x_1,x_2,\cdots,x_n,x_1,\cdots,x_n$$

Consider only data points that are not on the boundary ($$k<i<n-k$$).

## Building Blocks of Peak Functions

The peak functions shown above are useful. However, you may need other peak functions. You can build your own peak function out of the building blocks shown below.

• max_neighbors: computes the maximum of temporal neighbors
• min_neighbors: computes the minimum of temporal neighbors
• mean_neighbors: computes the mean of temporal neighbors
• sd_neighbors: computes the standard deviation of temporal neighbors

The above functions have a side argument. side determines which temporal neighbors will be used in the calculation. The valid values of side are shown below.

• "right", "r": right temporal neighbors ($$N^r(k,i,T)$$)
• "left", "l": left temporal neighbors ($$N^l(k,i,T)$$)
• "both", "b": left and right temporal neighbors ($$N(k,i,T)$$)
• "all", "a": the data point and its left and right temporal neighbors ($$N'(k,i,T)$$)

## References

Palshikar, Girish. "Simple algorithms for peak detection in time-series." Proc. 1st Int. Conf. Advanced Data Analysis, Business Analytics and Intelligence. Vol. 122. 2009.

Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PCh, Mark RG, Mietus JE, Moody GB, Peng C-K, Stanley HE. PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals. Circulation 101(23):e215-e220 [Circulation Electronic Pages; http://circ.ahajournals.org/cgi/content/full/101/23/e215]; 2000 (June 13).
2022-05-16 08:05:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5665910243988037, "perplexity": 4535.232817223703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662510097.3/warc/CC-MAIN-20220516073101-20220516103101-00141.warc.gz"}
https://math.stackexchange.com/questions/1160408/subring-generated-by-x-is-an-integral-domain-iff-it-is-a-field-iff-the-minimal
# Subring generated by $x$ is an integral domain iff it is a field iff the minimal polynomial of $x$ is irreducible

Let $R$ be a ring, $K$ a subfield of $R$, and $x \in R$. Let $F(X)$ be the minimal polynomial of $x$ over $K$. I want to prove that: $K[x]$ is a field $\iff K[x]$ is an integral domain $\iff F(X)$ is irreducible, using the following lemmas:

1. If $B$ is an integral domain and $A$ a subring of $B$ such that $B$ is integral over $A$, then one has the equivalence: $A$ is a field $\iff B$ is a field.
2. $K[X]/(F(X))$ and $K[x]$ are isomorphic.

I need help especially with the last equivalence. How could the minimal polynomial not be irreducible?

• Under item (2), do you perhaps mean to ask if $K[X]/(F(X))$ and $K(X)$ are isomorphic? – Robert Lewis Feb 22 '15 at 19:52

The minimal polynomial of, say, a matrix needn't be irreducible. Consider the minimal polynomial of $\begin{pmatrix}0&1\\0&0\end{pmatrix}$ in $R=M_2(\Bbb R)$.

Note that since $k[x]= k[X]/(f(X))$, $k[x]$ is always integral over $k$: $k$ is trivially integral over $k$, and $x=\bar X$ is a root of $f$. Hence $k[x]$ is a field if and only if it is a domain, since $k$ is a field. But $f$ irreducible implies $(f)$ is prime, and conversely, since $k[X]$ is a PID, in particular a UFD. You could also use that $k[X]$ is a PID directly, which entails that nonzero primes are maximal, i.e. PIDs (and more generally PIRs) have dimension at most $1$.
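Spelling out the matrix example from the answer (the computation below is added for concreteness): for $x=\begin{pmatrix}0&1\\0&0\end{pmatrix}$ in $R=M_2(\Bbb R)$ with $K=\Bbb R$, one has $x\neq 0$ but $x^2=0$, so the minimal polynomial is $F(X)=X^2$, which is reducible. Correspondingly, $K[x]\cong K[X]/(X^2)$ contains the nonzero element $x$ with $x\cdot x=0$, so $K[x]$ is neither an integral domain nor a field, in agreement with the equivalences being proved.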
2019-08-24 18:30:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9894400835037231, "perplexity": 94.84476853342423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321351.87/warc/CC-MAIN-20190824172818-20190824194818-00334.warc.gz"}
http://mathhelpforum.com/algebra/137340-logarithm-problem.html
# Math Help - Logarithm problem

1. ## Logarithm problem

State the values of $x$ for which the following identity is true: log(x+3) + log(x+4) = log(x^2 + 7x + 12)

2. Hint: $\log(xy)=\log x + \log y$, and if we have $\log a$, then $a$ must be $>0$.

3. Originally Posted by ugie
state the values of x for which the following identity is true log(x+3) + log(x+4) = log(x^2 + 7x + 12)
This is how I would proceed: We rely on the principle that log(xy) = log(x) + log(y). LHS means left-hand side, RHS right-hand side.

LHS: log(x+3) + log(x+4) = log((x+3)(x+4)) = log(x^2 + 7x + 12)

Since this is the same as the RHS, all we need to do is ensure that the quantities (x+3), (x+4) and (x^2 + 7x + 12) are positive -- otherwise, taking their log is undefined. So we are solving the simultaneous inequalities

x + 3 > 0
x + 4 > 0
(x+3)(x+4) > 0

4. $log(x+3)+log(x+4)=log\left [(x+3)(x+4)\right ]=log(x^2+7x+12)$ So $x\in(-3,\infty)$, as you can't take the log of a non-positive value.
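Spelling out the final step (this reduction is added for completeness): since $x+3>0$ already forces $x+4>1>0$ and hence $(x+3)(x+4)>0$, the simultaneous inequalities $$x+3>0,\qquad x+4>0,\qquad (x+3)(x+4)>0$$ collapse to the single condition $x>-3$, i.e. $x\in(-3,\infty)$.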
2014-09-22 22:58:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8552259802818298, "perplexity": 1332.4474913147906}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657137698.52/warc/CC-MAIN-20140914011217-00309-ip-10-234-18-248.ec2.internal.warc.gz"}
https://www.semanticscholar.org/paper/Sums-of-triangular-numbers-and-sums-of-squares-Akbary-Aygin/d984ecfbedddd954d3320aa3e9efe8ee189fb46d
• Corpus ID: 235727752

# Sums of triangular numbers and sums of squares

@inproceedings{Akbary2021SumsOT, title={Sums of triangular numbers and sums of squares}, author={Amir Akbary and Zafer Selcuk Aygin}, year={2021} }

• Published 2 July 2021
• Mathematics

For non-negative integers $a$, $b$, and $n$, let $N(a, b; n)$ be the number of representations of $n$ as a sum of squares with coefficients 1 or 3 ($a$ of ones and $b$ of threes). Let $N_{\mathrm{odd}}(a, b; n)$ be the number of representations of $n$ as a sum of odd squares with coefficients 1 or 3 ($a$ of ones and $b$ of threes). We have that $N_{\mathrm{odd}}(a, b; 8n + a + 3b)$ is the number of representations of $n$ as a sum of triangular numbers with coefficients 1 or 3 ($a$ of ones and $b$ of threes). It is known that for $a$ and $b$ satisfying $1 \leq a$…
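The passage between triangular numbers and odd squares rests on a classical identity (spelled out here for the reader; it is not quoted from the abstract): if $t_k=\frac{k(k+1)}{2}$ is a triangular number, then $$8t_k+1=4k^2+4k+1=(2k+1)^2,$$ so a representation $n=\sum_{i=1}^{a}t_{k_i}+3\sum_{j=1}^{b}t_{m_j}$ corresponds to the representation $$8n+a+3b=\sum_{i=1}^{a}(2k_i+1)^2+3\sum_{j=1}^{b}(2m_j+1)^2$$ of $8n+a+3b$ by $a$ odd squares with coefficient 1 and $b$ odd squares with coefficient 3.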
2022-01-28 09:32:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9300029873847961, "perplexity": 1168.9363827590494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305423.58/warc/CC-MAIN-20220128074016-20220128104016-00064.warc.gz"}
https://latex.org/forum/viewtopic.php?f=50&t=24925&p=84619
## LaTeX forum ⇒ BibTeX, biblatex and biber ⇒ Reference to Citation

Information and discussion about BibTeX - the bibliography tool for LaTeX documents.

KOR
Posts: 3
Joined: Mon Jun 30, 2014 6:40 pm

### Reference to Citation

Hi, I'd like to reference a citation. My current markup returns c=300000km/s^{$X$}, but now I want to write something like: "For further information see reference X". How can I do that?

Johannes_B
Site Moderator
Posts: 4044
Joined: Thu Nov 01, 2012 4:08 pm

Can you please expand that to a compilable minimal example? Right now I don't know what you want/need.

The smart way: Calm down and take a deep breath, read posts and provided links attentively, try to understand and ask if necessary.

KOR
Posts: 3
Joined: Mon Jun 30, 2014 6:40 pm

Sorry for that.

\documentclass[12pt,a4paper,draft]{scrartcl}
\usepackage[super,square,comma,sort&compress]{natbib}
\begin{document}
\section{Speed of light}
The speed of light is 300000kms$^{-1}.$\cite{Bibliographygoogle}
For further details see reference \ref{Bibliographygoogle}.
\bibliography{../Bibtex/work.bib}{}
\end{document}

The output should look like this: The speed of light is 300000kms$^{-1}$. (superscript citation number one) For further details see reference (citation number one in normal font size).

Johannes_B
Site Moderator
Posts: 4044
Joined: Thu Nov 01, 2012 4:08 pm

First of all, you should have a look at the package siunitx. It might be handy for you. Furthermore, natbib is kind of old-fashioned. If you have the choice, use biblatex in conjunction with biber instead. I still don't know exactly what your output should be, but I guess you are looking for something like \citet{<bibkey>} or \citep{<bibkey>}.

Best regards
Johannes

KOR
Posts: 3
Joined: Mon Jun 30, 2014 6:40 pm

The \citet command was what I was looking for. Thanks for that. But since I have to use a certain bibliography style (\bibliographystyle{angewchem}) which uses numbers instead of author-year, I get "author is undefined in citation Bibliographygoogle".

Johannes_B
Site Moderator
Posts: 4044
Joined: Thu Nov 01, 2012 4:08 pm

There is a biblatex style for that: biblatex-chem. If you want any further help, please provide a minimal working example. My crystal ball is at the shop. I am not a psychic.
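A side note that goes beyond the thread (the \citenum command is part of natbib, but its use here is my suggestion rather than the forum's): natbib can print the bare citation number in the running text even when citations are otherwise superscripted, which is exactly the behavior asked for:

For further details see reference \citenum{Bibliographygoogle}.

Unlike \citet, this does not need author data, so it also works with purely numeric styles such as angewchem.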
2019-08-19 14:59:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9328095316886902, "perplexity": 5576.294339977075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314752.21/warc/CC-MAIN-20190819134354-20190819160354-00540.warc.gz"}
http://mathhelpforum.com/math-topics/61447-binomial-theorem-questions.html
# Math Help - Binomial Theorem - Questions

1. ## Binomial Theorem - Questions

32. For the expansion of (k+t)^22, state:
a) the number of terms (1 mark)
b) the degree of each term (1 mark)
c) the first four terms in the expansion, without coefficients (2 marks)

33. Find the first five terms in the expansion of (r-2s)^8. (5 marks)

34. Find the term not involving 'y' in the expansion of (1÷y^3 + y^2)^5. (4 marks)

Any help on any of the above questions would be greatly, greatly appreciated. They are questions from a Grade 12 Data Management course that I am learning on my own.

2. 32 a) (x+y)^n has n+1 terms. So (k+t)^22 has 23 terms.

b) Every term in the expansion is of the form $k^{22-i}t^i$ (this is ignoring the coefficients). So their degree is (22-i)+i=22.

c) If we ignore coefficients, we can use what we said in part b) with i=0, 1, 2, 3. This gives $k^{22},\, k^{21}t,\, k^{20}t^2,\, k^{19}t^3$

33. Using the binomial expansion formula: $\sum_{i=0}^8 \begin{pmatrix} 8 \\ i \end{pmatrix} r^{8-i}(-2s)^i$

So, the first five terms would be $r^8$, $-16r^7s$, $112r^6s^2$, $-448r^5s^3$, and $1120r^4s^4$.

34. Ignoring the coefficients, every term will be of the form $y^{-3(5-i)}y^{2i}=y^{5i-15}$. So, if we want the term without y, we need the exponent to be zero, thus 5i-15=0 or i=3. Now all we have to do is calculate the coefficient for i=3, which is simply $\begin{pmatrix} 5 \\ 3 \end{pmatrix}=10$.

Hope that helps.

3. Thanks a lot! I was able to figure out most of them on my own, but your input definitely helped put it all together. I just have one more question that I was wondering if you could guide me through: Find the term involving 'y' in the expansion of (1÷ y^3 + y^2)^5

4. That one seems strange, because there is no term with 'y'. If we follow the same logic as in 34, we need the exponent to be 1, thus 5i-15=1 or i=16/5, which isn't an integer. So either the question is wrong or they just want you to say 0.

5. Is it at all possible for you to show your work for questions 33 and 34, so that I can get an idea of how my intermediate steps compare to yours? (I set up my equation a bit differently) Thank you!!

6. Sure.

33. Using the binomial expansion formula $\sum_{i=0}^8 \begin{pmatrix} 8 \\ i \end{pmatrix} r^{8-i}(-2s)^i$, the first term is i=0, the second is i=1, and so on.

1st term (i=0): $\begin{pmatrix} 8 \\ 0 \end{pmatrix} r^{8-0}(-2s)^0=(1)r^8(1)=r^8$

2nd term (i=1): $\begin{pmatrix} 8 \\ 1 \end{pmatrix} r^{8-1}(-2s)^1=(8)r^7(-2s)=-16r^7s$

3rd term (i=2): $\begin{pmatrix} 8 \\ 2 \end{pmatrix} r^{8-2}(-2s)^2=(28)r^6(4s^2)=112r^6s^2$

4th term (i=3): $\begin{pmatrix} 8 \\ 3 \end{pmatrix} r^{8-3}(-2s)^3=(56)r^5(-8s^3)=-448r^5s^3$

5th term (i=4): $\begin{pmatrix} 8 \\ 4 \end{pmatrix} r^{8-4}(-2s)^4=(70)r^4(16s^4)=1120r^4s^4$

34. The binomial expansion formula gives us $\sum_{i=0}^5 \begin{pmatrix} 5 \\ i \end{pmatrix} (1/y^3)^{5-i}(y^2)^i=\sum_{i=0}^5 \begin{pmatrix} 5 \\ i \end{pmatrix} y^{-3(5-i)}y^{2i}=\sum_{i=0}^5 \begin{pmatrix} 5 \\ i \end{pmatrix} y^{-15+3i+2i}=\sum_{i=0}^5 \begin{pmatrix} 5 \\ i \end{pmatrix} y^{-15+5i}$

I showed how I got that i=3 is the term we're looking for (-15+5i=0 or i=3), so let's evaluate the term i=3 (which is the 4th term): $\begin{pmatrix} 5 \\ 3 \end{pmatrix} y^{-15+5(3)}=(10) y^{-15+15}=10y^0=10(1)=10$

Hope that's detailed enough.
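As a postscript on the follow-up question in post 3 (this tabulation is an addition, not part of the thread): writing out all six terms of the expansion shows that every exponent of $y$ is a multiple of $5$ shifted by $-15$, so an exponent of $1$ can never occur: $$\left(\frac{1}{y^3}+y^2\right)^5 = y^{-15}+5y^{-10}+10y^{-5}+10+5y^{5}+y^{10}.$$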
2015-01-28 16:57:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.806623637676239, "perplexity": 756.8605209274274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121983086.76/warc/CC-MAIN-20150124175303-00132-ip-10-180-212-252.ec2.internal.warc.gz"}
http://physics.stackexchange.com/questions/122687/does-heisenberg-equation-of-motion-imply-the-schrodinger-equation-for-evolution
# Does Heisenberg equation of motion imply the Schrodinger equation for evolution operator?

Let us choose to postulate (e.g. considering the analogy of the Hamiltonian being a generator of time evolution in classical mechanics) $$i\hbar \frac{d\hat{U}}{dt}=\hat{H}\hat{U}\tag{1}$$ where $\hat{U}$ is the (unitary, linear) evolution operator and $\hat{H}$ the Hamiltonian (the most general version of which, i.e. time-dependent with instances at different times non-commuting). In the S-picture, one can easily show that (1) is equivalent to $$i\hbar \frac{d}{dt}|\psi_S(t)\rangle=\hat{H}_S|\psi_S(t)\rangle\tag{2}$$ where $\psi$ is a state, $|\psi_S(t)\rangle = \hat{U}(t)|\psi_S(0)\rangle \equiv \hat{U}(t)|\psi_H\rangle$ and $\hat{H}_S:=\hat{H}$. In the H-picture, it is straightforward to show that (1) implies $$\frac{d\hat{A}_H}{dt} = \frac{\partial\hat{A}_H}{\partial t}+\frac{1}{i\hbar}[\hat{A}_H,\hat{H}_H]\tag{3}$$ where $A$ is an observable and $\hat{A}_H(t)=\hat{U}(t)^\dagger \hat{A}_H(0)\hat{U}(t)\equiv\hat{U}(t)^\dagger \hat{A}_S\hat{U}(t)$ (and also $[\hat{H}_{H},\hat{H}_H]=0$, implying that the time dependence of $\hat{H}_H$ is purely explicit, i.e. $\hat{H}_H=\hat{H}_S\equiv \hat{H}$ with $[\hat{H},\hat{U}]=0$).

My question: is it possible to obtain (1) from (3), i.e. to show that (1) is equivalent to (3)?

Some thoughts on this: it is extensively mentioned in the literature that both pictures yield the same answers. Therefore, it should be possible to obtain (1) from (3), since (1) and (2) are equivalent. Assuming (3), the best I can get is that given an observable $A$, the operator $$\hat{C}:= \hat{A}_S\left(\frac{d\hat{U}}{dt} \hat{U}^\dagger - \frac{\hat{H}}{i\hbar}\right)$$ must be skew-Hermitian.

- LaTeX tip: use, e.g. \tag{1} to label your equations. For other configurations, it may be \eqno(1). – JamalS Jul 1 '14 at 21:53
- @JamalS: Thanks :) – Kubav Jul 1 '14 at 21:56

The result can be proved in a general way using, well, math. In particular, the theory of semigroups of linear operators on Banach spaces (I know that seems advanced and maybe not physical, but it is an elegant way of proving the result you seek ;-) ). Define the Banach space $\mathscr{L}^1(\mathscr{H})$ of trace class operators over a separable Hilbert space as the set of all bounded operators $u$ such that $\mathrm{Tr}\lvert u\rvert<\infty$. The fact that (3) holds for all observables (I will not discuss domains here, for the sake of simplicity) implies that it holds also for trace class operators that do not depend explicitly on time. In this case (3) is equivalent to the following Cauchy problem on the Banach space $\mathscr{L}^1(\mathscr{H})$: $$\frac{du(t)}{dt}=L u(t)\;, \; u(0)=x$$ where $\mathscr{L}^1(\mathscr{H})\ni x\equiv \hat{A}_0$ does not depend on time, $u(t)\equiv \hat{A}_H(t)$ and $L$ is the linear operator that acts as $i[\hat{H},\cdot]$ (I am assuming $\hslash=1$). If $x\in D(L)$, i.e. $x$ such that $\mathrm{Tr}\lvert [\hat{H},x]\rvert <\infty$, the solution of the Cauchy problem above is unique, because $H$ is self-adjoint. By Stone's theorem we know an explicit solution, namely $$u(t)=\hat{U}(t)x \hat{U}^\dagger(t)\; ,$$ where $\hat{U}(t)=e^{-it\hat{H}}$ is the unitary group generated by $\hat{H}$, which satisfies (1) on $D(\hat{H})$. That solution is unique, so assuming (3) for all observables implies that the operator $\hat{U}$ you used to define Heisenberg picture operators has to be exactly the group generated by $\hat{H}$, i.e. satisfy (1).
Just for the sake of completeness: once you have solved the Cauchy problem for $x\in D(L)$, you can extend the solution to trace class operators or compact operators or bounded operators; also to unbounded operators (provided $\hat{U}(t)x \hat{U}^\dagger(t)$ makes sense on some dense domain). This is what is called a mild solution of the Cauchy problem, because we don't know a priori if we are allowed to take the derivative. However, uniqueness is usually proved under general assumptions and for mild or even weak solutions, so I think it is quite safe to conclude that $\hat{U}(t)x \hat{U}^\dagger(t)$ is the unique solution of (3) in a suitable sense. - Does Stone's theorem in the above-mentioned form apply in the general case, where $\hat H$ is time-dependent and the Hamiltonians at different times need not commute? Also, are trace class operators that do not depend explicitly on time guaranteed to be self-adjoint (so that they really satisfy (3))? (This goes beyond my current knowledge, apologies if my questions are rubbish.) –  Kubav Jul 2 '14 at 10:09 Or, is there a general result saying that if (3) is satisfied by all observables, then it is satisfied by all trace class operators with no explicit time dependence? –  Kubav Jul 2 '14 at 10:29 No, Stone's theorem does not hold for time-dependent generators; however, your equation (1) is also not precise when the generators are time-dependent. In particular, you would then obtain a two-parameter group, which is well defined only with some assumptions on the aforementioned generators, and you have a right derivative different from the left derivative. Everything gets very, very technical in that case, but a similar line of reasoning could in principle be applied. –  yuggib Jul 2 '14 at 10:32 As you wrote, the trace class operators are general, and everything holds in particular for self-adjoint trace class operators. Since the solution is self-adjoint provided the initial $x$ is self-adjoint, you have a closed result that preserves self-adjointness. –  yuggib Jul 2 '14 at 10:32 Well, the evolution can be rigorously defined by means of the Dyson expansion for bounded $H(t)$. When, as usual in QM, the $H(t)$ are not bounded, a lot of problems may arise in defining a two-parameter unitary group generated by them. Most of the problems are linked to the domains of definition of $H(t)$ for each $t$. For example, it is a priori possible that $D(H(t_1))\cap D(H(t_1+1))=\emptyset$. In this case you cannot apply $H(t_1+1)H(t_1)$ in succession because it is not defined, and so you see that it would not be possible to define the time-ordered exponential. –  yuggib Jul 2 '14 at 11:13 CAUTION - ANSWER INCOMPLETE There is a gap in my argument (see the end); it relies on the claim that \begin{align} - \hat O^\dagger \hat A = \hat A\hat O \end{align} for all hermitian $\hat A$ implies $\hat O = 0$, which may not be true. Please comment if you know how to prove this or know of a counterexample. Update. Actually, the claim above is definitely false in one dimension, so the ensuing argument is certainly incomplete. Some notational clarifications. Let me first note that (3) as you wrote it, although very much standard, is really a rather severe abuse of notation.
The difference between the "total" and "partial" derivatives in the equation is that the partial derivative term is supposed to reference the time-dependence carried by the Schrodinger picture operator itself, while the total derivative refers to that plus the additional time-dependence introduced by conjugating the operator by $\hat U(t)$. To see this, note that if, as usual, we define \begin{align} \hat A_H(t) = \hat U^\dagger(t) \hat A_S(t) \hat U(t) \tag{$\star$} \end{align} then differentiating with respect to time on both sides and invoking (1) yields \begin{align} \frac{d\hat A_H}{dt}(t) = \hat U^\dagger(t)\frac{d \hat A_S}{dt}(t)\hat U(t) + \frac{1}{i\hbar} [\hat U^\dagger(t)\hat A_S(t) \hat U(t), \hat H] \tag{$\star\star$} \end{align} so if we feel like good physicists who like using partial derivative symbols in rather odd ways and define \begin{align} \frac{\partial \hat A_H}{\partial t}(t) = \hat U^\dagger(t)\frac{d \hat A_S}{dt}(t)\hat U(t) \end{align} then we get precisely your equation (3). Proof that (3) $\implies$ (1). All right, so now we know what that equation is really saying. Let's try to use it to prove (1) as you desire. We start with $(\star)$ and $(\star\star)$ and try to prove (1). In fact, plugging $(\star)$ into $(\star\star)$ and canceling the common term yields \begin{align} \frac{d \hat U^\dagger}{dt}(t) \hat A_S(t)\hat U(t) + \hat U^\dagger(t) \hat A_S(t) \frac{d\hat U}{dt}(t) = \frac{1}{i\hbar} [\hat U^\dagger(t)\hat A_S(t) \hat U(t), \hat H] \end{align} Expanding out the commutator, and multiplying both sides by $\hat U(t)$ on the left and $\hat U^\dagger(t)$ on the right, we find that \begin{align} \hat U(t) \frac{d\hat U^\dagger(t)}{dt} \hat A_S(t) + \hat A_S(t) \frac{d \hat U}{dt}(t) \hat U^\dagger(t) = -\frac{1}{i\hbar}\hat U(t)\hat H \hat U^\dagger (t) \hat A_S(t) + \frac{1}{i\hbar}\hat A_S(t) \hat U(t) \hat H \hat U^\dagger (t), \end{align} which, upon some rearrangement, gives \begin{align} \left(\hat U(t) \frac{d\hat U^\dagger(t)}{dt} + \frac{1}{i\hbar}\hat U(t)\hat H \hat U^\dagger (t)\right)\hat A_S(t) =\hat A_S(t)\left(\frac{1}{i\hbar}\hat U(t) \hat H \hat U^\dagger (t)-\frac{d \hat U}{dt}(t) \hat U^\dagger(t)\right) \end{align} Now let the term in parentheses on the right be called $\hat O(t)$; then, using the fact that $\hat A_S(t)$ is hermitian, this equation can be written as \begin{align} -\hat O^\dagger(t) \hat A_S(t) = \hat A_S(t) \hat O(t) \end{align} This holds for all hermitian $\hat A_S(t)$, so $\hat O(t) = 0$, which is to say that \begin{align} \frac{1}{i\hbar}\hat U(t) \hat H \hat U^\dagger (t)-\frac{d \hat U}{dt}(t) \hat U^\dagger(t) =0 \end{align} and (1) follows upon multiplying both sides on the left by $\hat U^\dagger(t)$. - Thank you for your answer. Unfortunately, I don't see how it follows from $-\hat O^\dagger(t) \hat A_S^\dagger(t) = \hat A_S(t) \hat O(t)$ that $2(\hat A_S(t)\hat O(t))^\dagger = 0$ unless $\hat A_S(t)\hat O(t)$ is Hermitian, which I don't think is the case in general. Could you please clarify? Thanks. –  Kubav Jul 2 '14 at 1:03 @Kuba You're completely right; that was sloppy. Let me think about this for a moment. If I can't salvage the answer, then I will delete it. –  joshphysics Jul 2 '14 at 1:07 @joshphysics This might just be me missing something obvious, but does $AO+(AO)^\dagger =0$ really imply $O=0$? –  Danu Jul 2 '14 at 1:49 @joshphysics Thanks a lot for responding (and I see now that you edited the answer accordingly as well).
We all know how scary it is not knowing for sure whether one is asking a smart question or a really stupid one! I would certainly advise against deleting; it is quite interesting to me already! –  Danu Jul 2 '14 at 1:53 @joshphysics A much better argument: $\widehat O^\dagger A + A\widehat O = 0$ for every Hermitian operator $A$ implies (for $A = I$) that $\widehat O$ is skew-Hermitian, and the condition becomes $[\widehat O, A] = 0$ for every Hermitian $A$. Hence the skew-Hermitian $\widehat O$ has a common basis of eigenvectors with every Hermitian $A$. Since we are free in our choice of $A$, every vector is an eigenvector of $\widehat O$, so $\widehat O$ has to be a scalar, purely imaginary because it is skew-Hermitian, and corresponds to a constant shift in potential energy in the Hamiltonian. –  doetoe Jul 2 '14 at 16:19 Just calculate (3) for $\hat{A}_H=\hat{U}_H$. With $\hat{U}_H(t)=\hat{U}^{\dagger}(t)\hat{U}(t)\hat{U}(t)=\hat{U}(t)$ and $\hat{H}_H=\hat{H}$ (as you said) you get: $\frac{d\hat{U}_H}{dt}=\frac{\partial \hat{U}}{\partial t}+\frac{1}{i\hbar}\underbrace{[\hat{H},\hat{U}]}_{=0 \text{(as you said)}}\stackrel{\text{$\hat{U}=e^{\hat{H}t/(i\hbar)}$}}{=}\frac{1}{i\hbar}\hat{H}\hat{U}$ In words: $\hat{U}$ commutes with $\hat{H}$, and thus the total derivative of $\hat{U}$ equals its partial derivative, which I can calculate to be $\frac{1}{i\hbar}\hat{H}\hat{U}$. So (1) is implied by (3). - It does not seem sufficient, because a priori you do not know that $U=e^{-itH}$ (otherwise it is trivial). And $[H,U]$ is zero whenever $U$ is a function of $H$, as defined by the spectral theorem. –  yuggib Jul 2 '14 at 8:06 But then eqns. (1) and (3) aren't connected, as long as I don't choose $U$ to be defined like that. Or how else would you find (3)? –  PeMa Jul 2 '14 at 8:26 You assume (3) to hold for all observables, without knowing the form of $U(t)$. –  yuggib Jul 2 '14 at 8:46 This is not an assumption. This you can prove. That's what the OP called "straightforward", and in this proof you already use $\partial_t\hat{U}$ –  PeMa Jul 2 '14 at 9:20 This is one side of the proof, namely (1)$\Rightarrow$(3). If you want to prove the converse, i.e. (3)$\Rightarrow$(1), you have to assume (3) without knowing (1)!! –  yuggib Jul 2 '14 at 9:22 I think you need to add a postulate to (3) in order to obtain (1). This postulate is actually assumed in every picture of the dynamics, and it is that the evolution operator forms a one-parameter group. It is such a natural assumption that there is no harm in taking it to be true. Even in classical Hamiltonian mechanics the flow in the coordinate space forms a group. Eq. (3) on its own, as previously stated, essentially tells you that $[U,H] = 0$, and then you can put $U(t) = f(\hat{H},t)$ for a suitable function $f$, via the spectral theorem. This of course may be expressed in full mathematical rigour at will. Now if you ask $U$ to be a group (and take $H$ time-independent for the sake of simplicity), it should be easy to prove that $f( - , t)$ must be $\exp(i (-) t)$ (the $i$ comes from the requirement of unitarity). After all, if you write your problem in an eigenbasis for $U$ and $H$, you are looking for a function $f(E,t)$ such that $f(E,t) f(E,t') = f(E,t+t')$, for each eigenvalue $E$ of $\hat{H}$ and each $t,t'$. (Note that now the product is the one in $\mathbb{R}$, not the composition of operators!)
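To make the last step of this argument concrete, here is a short worked version of the functional-equation step (a sketch under a continuity assumption, not part of the original answer). For fixed $E$ one is solving
$$f(E,t)\,f(E,t') = f(E,t+t'), \qquad |f(E,t)| = 1,$$
and every continuous solution of this Cauchy functional equation is an exponential, $f(E,t)=e^{t\,g(E)}$. Unitarity forces $g(E)=i\omega(E)$ with $\omega(E)$ real, and matching the derivative at $t=0$ to the generator, $\partial_t f(E,t)\big|_{t=0} = E/(i\hbar)$, gives $\omega(E) = -E/\hbar$, i.e. $\hat U(t)=e^{-it\hat H/\hbar}$, which is exactly (1). The operator-theoretic counterpart of this statement is the Stone's theorem argument in the semigroup answer above.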
- As I explained in my answer, the unique solution of (3) (for trace class operators, but actually in any Banach space) has to be the unitary group generated by $H$; it is not necessary to assume the group property a priori. –  yuggib Jul 2 '14 at 9:54 Only now have I fully understood your answer, which seems decisive to me now. Do you think that my answer answers the question (sorry for the repetition) "How to obtain (1) from the Heisenberg picture, i.e. $A_H(t) = U(t) A_H(0) U^{\dagger}(t)$, with some physical assumptions: $H$ does not evolve, $U$ is a group"? This is not the OP's question, which instead is "how to obtain (1) in the Heisenberg picture if we assume (3) (and nothing else)", right? –  giulio bullsaver Jul 2 '14 at 10:41 Well, there is a one-to-one correspondence between groups and the self-adjoint operators that generate them (Stone's theorem). So requiring $U$ to be a group means that $U$ satisfies (1), only with its generator on the r.h.s. In some sense, then, postulating the Heisenberg picture lets you know that the sought generator is effectively $H$. –  yuggib Jul 2 '14 at 11:07
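Since the thread never spells out a check, here is a minimal numerical sanity check of the equivalence between (1) and (3), assuming a finite-dimensional, time-independent $\hat H$ with $\hbar = 1$ (the matrix size, time, and step size are arbitrary choices for illustration):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, t, dt = 4, 0.7, 1e-6

M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (M + M.conj().T) / 2          # Hermitian Hamiltonian
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                 # Hermitian observable, no explicit t-dependence

U = lambda s: expm(-1j * s * H)   # candidate evolution operator

# Eq. (1): i dU/dt = H U, via a central finite difference
dU = (U(t + dt) - U(t - dt)) / (2 * dt)
print(np.allclose(1j * dU, H @ U(t), atol=1e-6))                       # True

# Eq. (3): dA_H/dt = (1/i)[A_H, H] for the conjugated observable
A_H = lambda s: U(s).conj().T @ A @ U(s)
dA_H = (A_H(t + dt) - A_H(t - dt)) / (2 * dt)
print(np.allclose(dA_H, -1j * (A_H(t) @ H - H @ A_H(t)), atol=1e-6))   # True
```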
2015-07-31 11:38:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 9, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9761912822723389, "perplexity": 420.67713891409943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988250.59/warc/CC-MAIN-20150728002308-00260-ip-10-236-191-2.ec2.internal.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/show-that-all-roots-x-1-6-x-1-6-0-are-given-icot-2-k-1-n-12-where-k-0-1-2-3-4-5-powers-roots-trigonometric-functions_57568
# Show that All Roots of (x+1)^6 + (x-1)^6 = 0 Are Given by -i cot((2k+1)pi/12) Where k = 0,1,2,3,4,5 - Applied Mathematics 1 Concept: Powers and Roots of Trigonometric Functions #### Question Show that all roots of (x+1)^6 + (x-1)^6 = 0 are given by -i cot((2k+1)pi/12), where k = 0,1,2,3,4,5. #### Solution
(x+1)^6 + (x-1)^6 = 0
∴ (x+1)^6 = -(x-1)^6
∴ ((x+1)/(x-1))^6 = -1
∴ ((x+1)/(x-1))^6 = e^(i(pi+2kpi)), k = 0,1,2,3,4,5      {∵ e^(ipi) = cos pi + i sin pi = -1 + i(0) = -1 (principal value)}
∴ (x+1)/(x-1) = e^(ipi(1+2k)/6)      ...(1)
Let 2θ = (pi(1+2k))/6      ...(2)
∴ From (1) & (2), (x+1)/(x-1) = e^(i2θ)
∴ By componendo-dividendo, ((x+1)+(x-1))/((x+1)-(x-1)) = (e^(i2θ)+1)/(e^(i2θ)-1)
∴ 2x/2 = (e^(iθ)[e^(iθ)+e^(-iθ)])/(e^(iθ)[e^(iθ)-e^(-iθ)])
∴ x = (2cos θ)/(2i sin θ)      {∵ sin θ = (e^(iθ)-e^(-iθ))/(2i) and cos θ = (e^(iθ)+e^(-iθ))/2}
∴ x = (1/i) cot θ = -i cot θ
∴ x = -i cot((2k+1)pi/12)      (from (2)), where k = 0, 1, 2, 3, 4, 5
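The claimed roots are easy to verify numerically; here is a small sketch (not part of the original solution) that substitutes each root into the left-hand side:

```python
import math

# Check that x = -i*cot((2k+1)*pi/12) satisfies (x+1)^6 + (x-1)^6 = 0
for k in range(6):
    theta = (2 * k + 1) * math.pi / 12
    x = -1j / math.tan(theta)            # -i*cot(theta)
    residual = (x + 1) ** 6 + (x - 1) ** 6
    print(k, abs(residual))              # each ~0, up to rounding error
```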
2020-03-29 11:25:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7658568024635315, "perplexity": 9652.0672107447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370494331.42/warc/CC-MAIN-20200329105248-20200329135248-00009.warc.gz"}
https://www.homebuiltairplanes.com/forums/threads/check-out-this-lawn-mower-engine-kohler-engine.32898/page-2
# check out this lawn mower engine (kohler engine) Discussion in 'General Experimental Aviation Questions' started by oldcrow, Jan 9, 2020. 1. Jan 14, 2020 ### litespeed #### Well-Known Member Joined: May 21, 2008 Messages: 1,550 320 Location: Sydney I don't really see 70 hp on a big block as a big ask, especially if the machine can do with a much lower amount. 50% would probably be fine for cruising around, and the bigger power would be for short periods. If cooled well it should not be an issue: add some quality internals, a bigger sump volume, a quality lightweight motorbike oil cooler, a good flow job and balance, and quality carbs (two, not one). Add higher-comp pistons, springs and retainers, coat the head internals and piston crown for heat, and pick a cam to suit your rev range and torque curve. Make a quality header set and it should be able to make the 70 hp and not be a worry for an aircraft. And add a quality (not cheap) ignition system that makes lovely fat sparks. 70 hp a litre max is not really big; in fact it's no more than a stock Moto Guzzi made. At 60% it's making 42 hp; at 70%, 49 hp. So the 49 hp with a quality build and parts should be fine as long as the heat is managed. That is approximately a big block's max power, but I expect it would actually spend time doing a lot less in an aircraft with Duncan. If 50 hp was OK, then that's say 35 hp cruise, so a hot one will hardly be working its guts out. A well-modded one should be smoother, use a bit less fuel for a given power, run cooler and be much more reliable. Would you scream around at max power? Not likely. At reasonable loads and revs, when replacing a normal but heavier version with a light, smooth and efficient version with more usable power, it should be a good thing. The only downside is labour and parts costs to do it right, but still that is not excessive. If it can't cruise at 40 hp all day, I would be very surprised. 2. Jan 14, 2020 ### pictsidhe #### Well-Known Member Joined: Jul 15, 2014 Messages: 7,351 2,118 Location: North Carolina The hot version of the big twin was 50 hp. That conversion was done by a competent engineer. I have little doubt that he could have squeezed more power out of it, if he thought it was a good idea. I have yet to hear of a reliable aircraft conversion running around double the stock power. There are, however, plenty of people who haven't tried it who are saying that it shouldn't be a problem... 3. Jan 16, 2020 ### litespeed #### Well-Known Member Joined: May 21, 2008 Messages: 1,550 320 Location: Sydney Touché.... 4. Jan 22, 2020 ### D_limiter #### Member Joined: Jul 13, 2018 Messages: 11 6 Location: Norcal The DA-11 is a pretty little plane, emphasis on little. Useful load is only 200 lb, which is not useful for my 200-lb-plus carcass. I'm fascinated by the v-twin conversions, because they are mass-market engines with modern engineering at a reasonable price. I would love to see a working conversion on an aircraft. Are there any, or is it still premature? 5. Jan 22, 2020 ### Jay Dub #### Member Joined: Dec 17, 2019 Messages: 5 7 Location: SE WY Kevin Armstrong in the UK put about 160 hours on a Japanese-made Briggs Vanguard 627. He didn't use a reinforced cam and it broke. He put in a specially made cam (and recommends a reinforced one after his forced landing), changed rods and pistons, HD valve springs, SS valves, etc. He wasn't well received here on HBA but I've contacted him and he has a lot of development information.
He has many YouTube videos under his name. He is now using a liquid-cooled Chinese ATV engine and that is working well. I respect him highly and can't understand why he wasn't well received here. A guy named Kleber in Brazil made a few 627s using info from Kevin, but he doesn't change much of the internals other than HD valve springs, dual Mikuni carbs, and a tuned exhaust. Both Kevin and Kleber estimate 35 to 39 HP with their respective mods. They say they burn around 1.2 to 1.5 gph cruising around. Parazoom also uses the 627 on their machines. It's being done. It seems many here will tell you why it can't be done reliably, but people like Kevin, Kleber, and Parazoom are doing it and sharing weak areas to be aware of. It's called experimental aviation for a reason. If you want certified parts, then fly certified airplanes. If you're willing to experiment with the possibility of a failure, then experimental aviation might be for you. blane.c, litespeed and Hephaestus like this. 6. Jan 22, 2020 ### litespeed #### Well-Known Member Joined: May 21, 2008 Messages: 1,550 320 Location: Sydney Despite the naysayers above, who happily condemn any suggestion that a modded version can be used reliably: a 627 cc, 23 hp engine with slight mods for reliability is at 39 hp. Guess how much extra that is? 70%, just like I said. A big block, with even better mods for flow and cooling and with stronger and better-balanced parts, could do the same. It is not rocket science, but simple mechanical engineering. And a big block is not likely to be running at anything like more than 65-70% load for anything more than a few minutes. The facts in the air tend to indicate it is not a huge problem. 7. Jan 22, 2020 ### lr27 #### Well-Known Member Joined: Nov 3, 2007 Messages: 3,577 541 I think the Columban Lucille usually uses a converted Briggs, and I've seen ads from some company in Europe that sells them already converted. The engine in the Da-9, as I recall, has only 21 hp. V-twins get a lot bigger than that. 8. Jan 22, 2020 ### Vigilant1 Joined: Jan 24, 2011 Messages: 4,412 2,070 Location: US Yes, these engines are in common use, and getting more common. The search function here at HBA will produce many examples of folks discussing these engines, doing the work to convert them, and some accounts of folks who have flown them. The most common planes that use these engines in direct drive mode are the MC30 Luciole and the Minisport SD-1. Slower, draggier aircraft (ultralights, trikes, etc.) typically use these engines with a PSRU. HBA member TiPi is documenting the rationale behind his engine choice for his SD-1 and the steps he's taking to convert the B&S 810 to run heads down. More here: https://www.homebuiltairplanes.com/forums/threads/b-s-49-series-810cm3-49ci-tipis-conversion-for-aircraft-use.32368/ And here's a thread with questions/answers about his project: https://www.homebuiltairplanes.com/...49ci-for-aircraft-use-tipis-q-a-thread.32382/ Here's a post with a slideshow of the SD-1 conversion of a B&S 810cc engine About the 22 HP Predator engine from Harbor Freight in the US: https://www.homebuiltairplanes.com/...reight-engine-evaluation-other-v-tiwns.21130/ Kevin posted here under username "Factory-Fit"; a search for his posts will be useful to those interested in using these engines with a PSRU. My post here gave some info on his Vanguard 627cc conversion and a link to a video he made. He is a big fan of Ace Redrives, that is for sure.
He posted later in the same thread and I think a lot of good information was shared. Others can judge if he was "well received." Too often when folks quote HP numbers they don't say whether the engine can make that power continuously. Of course these engines can make 1 HP/cu inch and more, the racing mowers do that. Make the mechanical mods and run them at high RPM--not a mystery at all. But at that HP, they can't shed the heat as fast as it is being made, and so the heads get hot, and bad things happen soon thereafter (just like any air-cooled engine). Valley Engineering sold their Big Block with HPs up to 50, but they were honest in saying the continuous HP they would make was 32. Maybe some more careful attention to airflow/baffling would allow a few more HP from a big block, but anyone claiming they can make 50 HP for 20 minutes straight, and have the engine last for a few hundred hours---well... Last edited: Jan 22, 2020 blane.c likes this. 9. Jan 22, 2020 ### lr27 #### Well-Known Member Joined: Nov 3, 2007 Messages: 3,577 541 I think I read someplace that a guy had a way to extend cooling fins. Some kind of aluminum brazing? I'm skeptical, but it would be nice. 10. Jan 23, 2020 ### Vigilant1 Joined: Jan 24, 2011 Messages: 4,412 2,070 Location: US Bob Hoover (no, not THAT Bob Hoover) used a newly available (at the time) welding rod to make his "fat fin" VW modification. He claimed that it did improve the ability of the heads to shed heat. It might be useful for these industrial engine heads; the stock fin spacing is quite wide (probably to allow debris to pass through without clogging them, at least not right away). But none of this is going to be magic. Lycoming, Continental, VW, stock B&S engines--all of these air-cooled NA 4-stroke engines have remarkably similar cc/continuous-HP ratios, despite a lot of investment and a lot of commercial incentive to up the HP if it could be done simply and reliably. Last edited: Jan 23, 2020 11. Jan 23, 2020 ### lr27 #### Well-Known Member Joined: Nov 3, 2007 Messages: 3,577 541 The Briggs in its native habitat is fan cooled, so maybe it doesn't need the fins. OTOH, that habitat is dusty. I thought of the right Bob Hoover right away, and then you confused me. 12. Jan 23, 2020 ### pictsidhe #### Well-Known Member Joined: Jul 15, 2014 Messages: 7,351 2,118 Location: North Carolina Bob Hoover played with welding additional fins on VWs. If you want to achieve VW power, you will need VW-size fins. 13. Jan 23, 2020 ### Jay Dub #### Member Joined: Dec 17, 2019 Messages: 5 7 Location: SE WY According to Kevin Armstrong, the Vanguard 627 using the forced air cooling never had a cooling issue. His machine used high-comp pistons, a different valve train, a larger carb, etc., and still his CHT temps stayed good. Free air cooling could possibly be different though. You must keep in mind, these engines are made to run @ 3600 rpm (some up to 4000 rpm) in stationary environments with the full load of a pump or generator, with dirt and grass clippings stuck in the fins, and still not overheat. That's what they are designed to do day after day. Automotive engines are not. litespeed likes this. 14. Jan 23, 2020 ### blane.c #### Well-Known MemberHBA Supporter Joined: Jun 27, 2015 Messages: 3,807 678 Location: capital district NY Last I checked you could get the 830cc engines with EFI for around $1700 each, and some carbureted versions are just under $1000.
The less expensive carbureted versions of this engine are being modified, for example in the SD-1 ( https://www.sdplanes.com/ ), and some people in this forum (HBA) are either documenting a conversion or planning/dreaming of using one or more of them in an experimental design. Depending on who's doing the work or the planning/dreaming, you can expect 30-35 hp @ 3600 RPM and direct-drive weights north and south of 40 kilos. Whatever version you buy, three (3) of them would cost in the neighborhood of $3000 - $5100 initial outlay for 90-105 hp. See FAR's 61.31 (I) EXCEPTIONS (2) (B) ( https://www.law.cornell.edu/cfr/text/14/61.31 ). These could be used for experimentation reasonably safely in multiples (in case one quits) and would provide abundant power for a single-place or two-place design. Multiples would also provide redundant sources of electrical power for Nav/Coms and other aviation-related devices. While further monies would be required to modify the engines, it is likely, from the sources I have been following, that the cost to an experimenter would be less than double the initial outlay, so 90-105 hp @ +/- 120 kilos for less than $10k. 15. Jan 23, 2020 ### blane.c #### Well-Known MemberHBA Supporter Joined: Jun 27, 2015 Messages: 3,807 678 Location: capital district NY One of the easiest ways, conceptually, of having a three-engine plane is to use a canard type (like any of the Vari-xxxxs, for example): modify the design so each canard can carry an engine appropriately outboard of the fuselage, with the third engine in the normal place. 16. Jan 23, 2020 ### Vigilant1 Joined: Jan 24, 2011 Messages: 4,412 2,070 Location: US That's true. But their stock fans produce quite a bit of air volume and (importantly) they can produce pressure at the front of the baffling plenum that is greater than what we can get from dynamic air pressure at normal cruise airspeeds (e.g. 100 MPH). That's >not< to say that the engines won't cool adequately in aircraft (they do seem to cool satisfactorily up to about 27 cc/continuous HP, at approx 100 MPH), but careful attention to baffle design is important.
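Vigilant1's roughly 27 cc per continuous HP figure makes for a quick back-of-the-envelope estimate. Here is a small sketch using that ratio with the displacements discussed in this thread (the ratio is a thread rule of thumb, not a manufacturer spec):

```python
# Continuous-power estimates from the ~27 cc per continuous HP rule of thumb
CC_PER_CONTINUOUS_HP = 27
engines_cc = {"Vanguard 627": 627, "B&S 810": 810, "B&S 830": 830}
for name, cc in engines_cc.items():
    print(f"{name}: ~{cc / CC_PER_CONTINUOUS_HP:.0f} hp continuous")
# Vanguard 627: ~23 hp (matches the stock 23 hp rating quoted above)
# B&S 810: ~30 hp; B&S 830: ~31 hp
```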
2020-02-26 14:24:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3016990125179291, "perplexity": 4558.922099617207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146342.41/warc/CC-MAIN-20200226115522-20200226145522-00281.warc.gz"}
https://mathhelpboards.com/threads/finding-maximum-value.4428/
# Finding Maximum Value #### anemone ##### MHB POTW Director Staff member Find the maximum of the expression $$\displaystyle x^4y+x^3y+x^2y+xy+xy^2+xy^3+xy^4$$ if $$\displaystyle x,\;y$$ are real numbers with $$\displaystyle x+y=2$$. #### Opalg ##### MHB Oldtimer Staff member This may not be the quickest solution, but it avoids calculus. Let $x=1+t$, then $y=1-t$. Notice that $1+x+x^2+x^3 = \dfrac{x^4-1}{x-1} = \dfrac{(1+t)^4-1}{t}$, and similarly $1+y+y^2+y^3 = -\dfrac{(1-t)^4-1}{t}.$ Also $xy = 1-t^2.$ Then \begin{aligned}x^4y+x^3y+x^2y+xy+xy^2+xy^3+xy^4 &= xy\bigl((1+x+x^2+x^3) + (1+y+y^2+y^3) - 1\bigr) \\ &= (1-t^2)\Bigl(\frac{(1+t)^4-1}{t} - \frac{(1-t)^4-1}{t} - 1\Bigr) \\ &= (1-t^2)(7+8t^2) \\ &= 7+t^2 -8t^4 \\ &= \frac{225}{32} - 8\Bigl(t^2 - \frac1{16}\Bigr)^2\quad \text{(completing the square).}\end{aligned} Thus the maximum value is $\frac{225}{32}$, which occurs when $t = \pm\frac14$, or when $x = \frac34$ or $\frac54.$ Last edited by a moderator: #### MarkFL Staff member Here's a method involving the calculus: If we use the constraint to get $$\displaystyle y=2-x$$ and substitute this into the objective function, we find, after simplification, that: $$\displaystyle f(x)=-8x^4+32x^3-47x^2+30x$$ Equating the derivative to zero: $$\displaystyle f'(x)=-32x^3+96x^2-94x+30=0$$ Dividing through by 2, we have: $$\displaystyle -16x^3+48x^2-47x+15=0$$ Multiplying through by -1 and factoring, we have: $$\displaystyle (x-1)(4x-5)(4x-3)=0$$ Use of the first derivative test shows that relative maxima occur at: $$\displaystyle x=\frac{3}{4},\,\frac{5}{4}$$ and we find: $$\displaystyle f_{\text{max}}=f\left(\frac{3}{4} \right)=f\left(\frac{5}{4} \right)=\frac{225}{32}$$ #### anemone ##### MHB POTW Director Staff member Thanks to both of you for participating, and for the awesome methods of solving this problem! My solution: $$\displaystyle x^4y+x^3y+x^2y+xy+xy^2+xy^3+xy^4=xy(x^3+x^2+x+1+y+y^2+y^3)$$ $$\displaystyle =xy((x^3+y^3)+(x^2+y^2)+(x+y)+1)$$ $$\displaystyle =xy((x+y)^3-3xy(x+y)+(x+y)^2-2xy+(x+y)+1)$$ $$\displaystyle =xy((2)^3-3xy(2)+(2)^2-2xy+(2)+1)$$ $$\displaystyle =xy(15-8xy)$$ $$\displaystyle =-8\left(xy-\frac{15}{16}\right)^2+\frac{225}{32}$$ Since $x+y=2$, the value $xy=\frac{15}{16}$ is attainable (take $x=\frac34$, $y=\frac54$), so the bound is achieved. Hence, the maximum value is $$\displaystyle \frac{225}{32}$$.
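The three solutions agree, and the result is easy to cross-check mechanically. Here is a short sketch that redoes MarkFL's calculus route in SymPy (the names and structure are my own, just for verification):

```python
import sympy as sp

x = sp.symbols('x', real=True)
y = 2 - x                                        # impose the constraint x + y = 2
f = sp.expand(x**4*y + x**3*y + x**2*y + x*y + x*y**2 + x*y**3 + x*y**4)
# f simplifies to -8*x**4 + 32*x**3 - 47*x**2 + 30*x
crit = sp.solve(sp.diff(f, x), x)                # critical points: [3/4, 1, 5/4]
print(sp.Max(*[f.subs(x, c) for c in crit]))     # 225/32
```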
2021-07-27 20:24:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.000009536743164, "perplexity": 1572.013554499617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153491.18/warc/CC-MAIN-20210727202227-20210727232227-00655.warc.gz"}
https://www.miryanagrigorova.com/copy-of-seminar-in-probability-fina
## 2019/2020 Prof. Teemu PENNANEN (King's College London) 10th of October 2:00 pm LT 3 Roger Stevens Building Title Convex duality in optimal investment and contingent claim valuation in illiquid markets Abstract We develop a duality theory for optimal investment and contingent claim valuation in markets where traded assets may be subject to nonlinear trading costs and portfolio constraints. Under fairly general conditions, the dual expressions decompose into three terms, corresponding to the agent's risk preferences, trading costs and portfolio constraints, respectively. The dual representations are shown to be valid when the market model satisfies an appropriate generalization of the no-arbitrage condition and the agent's utility function satisfies an appropriate generalization of asymptotic elasticity conditions. When applied to classical liquid market models or models with bid-ask spreads, we recover well-known pricing formulas in terms of martingale measures and consistent price systems. Building on the general theory of convex stochastic optimization, we also obtain optimality conditions in terms of an extended notion of a "shadow price". The results are illustrated by establishing the existence of solutions and optimality conditions for some nonlinear market models recently proposed in the literature. Our results allow for significant extensions, including nondifferentiable trading costs, which arise e.g. in modern limit order markets where the marginal price curve is necessarily discontinuous. Dr. Jing YAO (Heriot-Watt University) 24th of October 2:00 pm LT3 Roger Stevens Building Title Downside Risk Optimization with Random Targets and Portfolio Amplitude Abstract In this paper, we rationalize using downside risk optimization subject to a random target in portfolio selection. In the context of normality, we derive analytical solutions to the downside risk optimization with respect to random targets and investigate how the random target affects the optimal solutions. In doing so, we propose using portfolio amplitude, as a new measure in the literature, to characterize the investment strategy. In particular, we demonstrate the mechanism by which the random target inputs its impact into the system and alters the optimal portfolio selection. Our results underpin why investors prefer holding some specific assets in following random targets and provide explanations for some special investment strategies, such as constructing a stock portfolio following a bond index. Numerical examples are presented to clarify our theoretical results. Dr. Kathrin Glau (Queen Mary University London) 7th of November 2:00 pm LT3 Roger Stevens Building Title Low-Rank Tensor Approximation for Parametric Option Pricing Abstract Computationally intensive problems in finance are characterized by their intrinsic high dimensionality, which is often paired with optimizations leading to nonlinearities. While classical numerical methods typically suffer from the curse of dimensionality, machine learning approaches promise to yield fairly accurate results with a method that is scalable in the dimensions. Computationally intense training phases and the required large set of training data pose some of the major challenges for the development of new and adequate numerical methods for finance. Merging classical numerical techniques with learning methods, we propose a new approach to option pricing in parametric models. The work is based on [1] and ongoing research with Paolo Colusso and Francesco Statti.
[1] Glau, K.; Kressner, D.; Statti, F.: Low-rank tensor approximation for Chebyshev interpolation in parametric option pricing. Preprint, 2019. Dr. Gonçalo dos Reis (University of Edinburgh) 14th of November 2:00 pm LT 3 Roger Stevens Building Title Itô-Wentzell for measure-dependent random fields under full and conditional measure flows Abstract We show several Itô-Wentzell formulae on Wiener spaces for real-valued maps depending on measures. We present both the full measure-flow and the marginal-flow cases, where the measure flow derivatives are understood in the sense of Lions. This talk has been cancelled. Dr. Renyuan XU (University of Oxford) 21st of November 2:00 pm LT3 Roger Stevens Building Title A Case Study on Pareto Optimality for Collaborative Stochastic Games Abstract Pareto Optimality (PO) is an important concept in game theory to measure global efficiency when players collaborate. In this talk, we start with the PO for a class of continuous-time stochastic games when the number of players is finite. The derivation of PO strategies is based on the formulation and analysis of an auxiliary N-dimensional central controller's stochastic control problem, including the regularity property of its value function and the existence of the solution to the associated Skorokhod problem. This PO strategy is then compared with the set of (non-unique) NE strategies under the notion of Price of Anarchy (PoA). The upper bound of PoA is derived explicitly in terms of model parameters. Finally, we characterize analytically the precise difference between the PO and the associated McKean-Vlasov control problem with an infinite number of players, in terms of the covariance structure between the optimally controlled dynamics of players and characteristics of the no-action region for the game. This is based on joint work with Xin Guo (UC Berkeley). Dr. Máté GERENCSER (IST Austria) 22nd of November 2:00 pm LT3 Roger Stevens Building Title Boundary renormalisation of stochastic PDEs Abstract We discuss solution theories of singular SPDEs endowed with various boundary conditions. In several examples nontrivial boundary effects arise and another layer of renormalisation is required. We outline how these are connected to spatial singularities of simple trees in equations like the KPZ, PAM, or Phi^4. Dr. Luitgard VERAART (London School of Economics) 5th of December 2 pm LT3 Roger Stevens Building Title When does portfolio compression reduce systemic risk? Abstract We analyse the consequences of portfolio compression on systemic risk. Portfolio compression is a post-trading netting mechanism that reduces gross positions while keeping net positions unchanged, and it is part of the financial legislation in the US (Dodd-Frank Act) and in Europe (European Market Infrastructure Regulation). We derive necessary structural conditions for portfolio compression to be harmful and discuss policy implications. In particular, we show that the potential danger of portfolio compression comes from defaults of firms that conduct portfolio compression. If no such defaults occur, then portfolio compression reduces systemic risk. Prof. Dr. David PRÖMEL (Universität Mannheim) 12th of December 2:00 pm LT 3 Roger Stevens Building Title Martingale Optimal Transport in Robust Finance Abstract Without assuming any probabilistic price dynamics, we consider a frictionless financial market given by the Skorokhod space, on which some financial options are liquidly traded.
In this model-free setting we show various pricing-hedging dualities and the analogue of the fundamental theorem of asset pricing. For this purpose we study the corresponding martingale optimal transport (MOT) problem: We obtain a dual representation of the Kantorovich functional (super-replication functional) defined for functions (financial derivatives) on the Skorokhod space using quotient sets (hedging sets). Our representation takes the form of a Choquet capacity generated by martingale measures satisfying additional constraints to ensure compatibility with the quotient sets. The talk is based on joint work with Patrick Cheridito, Matti Kiiski and H. Mete Soner. Dr. Martin HERDEGEN (University of Warwick) 13th of February 2:00 pm-3:00 pm LT 05 (7.05) Roger Stevens DOUBLE SEMINAR Title A Dual Characterisation of Regulatory Arbitrage for Coherent Risk Measures Abstract We revisit portfolio selection in a one-period financial market under a coherent risk measure constraint, the most prominent example being Expected Shortfall (ES). Unlike in the case of classical mean-variance portfolio selection, it can happen that no efficient portfolios exist. We call this situation regulatory arbitrage. We then show that the absence of regulatory arbitrage is intimately linked to the interplay between the set of equivalent martingale measures (EMMs) for the discounted risky assets and the set of absolutely continuous measures in the dual characterisation of the risk measure. In the special case of ES, our result shows that the market does not admit regulatory arbitrage for ES at confidence level $\alpha$ if and only if there exists an EMM $Q \approx P$ such that $\Vert \frac{dQ}{dP} \Vert_\infty < \frac{1}{\alpha}$. The talk is based on joint work with my PhD student Nazem Khan. Prof. Johannes MUHLE-KARBE (Imperial College London) 13th of February 3:10 pm-4:10 pm LT 05 (7.05) Roger Stevens DOUBLE SEMINAR Title Equilibrium asset pricing with frictions Abstract We study how the prices of assets depend on their "liquidity", that is, the ease with which they can be traded. We show that equilibrium prices and the corresponding optimal trading strategies can be characterised by systems of coupled forward-backward SDEs. We outline some first well-posedness results and discuss explicit formulas that arise in the limit of large liquidity. The talk is based on joint works with Agostino Capponi, Lukas Gonon, Martin Herdegen, Dylan Possamai, and Xiaofei Shi. Prof. Saul JACKA (University of Warwick) 27th of February 2:00 pm LT 5 Roger Stevens Building Title Minimising the shuttle/commute time Abstract We consider the problem of choosing a diffusion's drift so as to minimise the expected time to commute from zero to 1 and back again. We solve this first as a static problem and then (in a suitable sense) dynamically. Prof. Zhenya LIU (Aix-Marseille University and Renmin University of China) 5th of March 2:00 pm LT 6.142 Worsley Building Title The Optimal Equity Selling Price with Endogenous Drawdown Abstract In this paper, we propose the endogenous drawdown in an investor's reward function. The endogenous drawdown is the difference between the underlying process and its maximum-related process. We find the optimal selling price is a function of the historical highest price, the weights of profit and loss in the investor's reward function, and the characteristics of the underlying stochastic process.
With the data of the S&P 500 Index and SSE Composite Index, we calculate the numerical solution of the optimal selling price with consideration of the endogenous drawdown. In an out-of-sample test, this optimal selling price performs well, triggering a sale before the price drops sharply. Dr. Ankush AGARWAL (University of Glasgow) 12th of March 2:00 pm LT 5 Roger Stevens Building Title A Fourier-based Picard-iteration approach for a class of McKean-Vlasov SDEs with Lévy jumps Abstract We consider a prototype class of Lévy-driven stochastic differential equations (SDEs) with McKean-Vlasov (MK-V) interaction in the drift coefficient. It is assumed that the drift coefficient is affine in the state variable, and only measurable in the law of the solution. We study the equivalent functional fixed-point equation for the unknown time-dependent coefficients of the associated linear Markovian SDE. By proving a contraction property for the functional map in a suitable normed space, we infer existence and uniqueness results for the MK-V SDE, and derive a discretized Picard iteration scheme that approximates the law of the solution through its characteristic function. Numerical illustrations show the effectiveness of our method, which appears to be appropriate to handle the multi-dimensional setting. This is a joint work with Stefano Pagliarani. Prof. Tusheng ZHANG (University of Manchester) 12th of March 3:10 pm LT 5 Roger Stevens Building Title Talagrand Concentration Inequalities for Stochastic Heat-Type Equations under Uniform Distance Abstract In this paper, we establish a quadratic transportation cost inequality under the uniform/maximum norm for solutions of stochastic heat equations driven by multiplicative space-time white noise. The proof is based on a new inequality we obtained for the moments of the stochastic convolution with respect to space-time white noise, which is of independent interest. The solutions of such stochastic partial differential equations are typically not semimartingales on the state space. The talk by Tusheng Zhang has been CANCELLED. Prof. Peter BANK (Technische Universität Berlin) 19th of March 3:10 pm-4:10 pm LT 5 Roger Stevens Building Title TBA Abstract TBA This seminar session has been postponed due to the Covid-19 crisis. It will take place on the 25th of March 2021.
2022-05-25 13:40:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4757857024669647, "perplexity": 2547.6739820688645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662587158.57/warc/CC-MAIN-20220525120449-20220525150449-00644.warc.gz"}
http://math.stackexchange.com/questions/220266/an-estimation-using-the-functions-in-schwarz-class
# An estimation using the functions in the Schwartz class. Define $$U := C^0 ([0,T], W^{1,2} ) \cap C^1 ([0,T], L^2 ) \cap L^\infty ([0,T] , W^{s,2} ).$$ Then how can I prove that $\lim_{x_j \to \infty}| u |^2 = 0$ by using the fact that $$C^1 ([0,T] , \mathcal S ) \text{ is dense in } U?$$ Here $u : [0,T] \times \Bbb R^n \to \Bbb C^n$; the norm for the space $U$ is $|u|_{s,T} := \sup _{t \in [0,T]} \| u(t) \|_{W^{s,2}}$; $W^{s,p}$ is the usual Sobolev space; and $\mathcal S$ denotes the Schwartz class. - @DavideGiraudo $|u|$ means $\sqrt{|u_1|^2 + \cdots + |u_n|^2}$ for a fixed $t \in [0,T]$. – Ann Oct 29 '12 at 14:37 @DavideGiraudo Sorry, $\Bbb C$ should be fixed to $\Bbb C^n$. – Ann Oct 29 '12 at 14:39 and $u \in U$ means that all the components $u_j \in U$. – Ann Oct 29 '12 at 14:41 @DavideGiraudo $p$ is always 2, but $s$ can be $0,1,2,\cdots$. – Ann Oct 29 '12 at 14:48 Do you see why you just have to show it when $u_j$ is in the Schwartz space? – Davide Giraudo Oct 29 '12 at 14:53
2016-06-26 08:19:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.965698778629303, "perplexity": 257.08128638362194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00193-ip-10-164-35-72.ec2.internal.warc.gz"}
http://physicistfarmer.blogspot.com/2010/04/problems-in-paradise.html
## Sunday, April 11, 2010 ### Problems in paradise. Hello everyone Once again I have been too busy to write for a few days, mostly working on my electromagnetism homework that was due on Friday, for which I stayed up till 2 in the morning to finish. And then, fittingly enough as we approach the end of the semester, my new computer's wireless module decided to conk out on me yesterday. It appears to be a bona fide hardware problem, so it looks like I'm stuck till I can find someone to fix it. This morning when I turned on my old computer I thought for a while that it had suddenly stopped working as well, and was starting to wonder if I had inadvertently grown an anti-Internet aura, but thankfully a simple restart fixed the problem, else you would not be reading this post right now. So… Expect fewer posts from me for a time, due both to my busyness and this. Thankfully, my new computer continues to function fine in other respects, so I can at least use it for non-Internet related school things, like working on my project for PDE's. And the take-home test for that class, and the new take-home test for electromagnetism, and…yeah. I suppose, if I can live through the end of the semester I can make it through anything, but that's a pretty big 'if' with the seemingly ever-accelerating pace of things. My consolation for doing all this homework is that I can typeset it with $$\LaTeX$$. It makes doing the work almost fun, with the expectation of seeing beautifully typeset math come out of what you do. When I have time and opportunity, I'll attach some pictures of the gorgeous results of $$\LaTeX$$ typesetting from my homework, so you can see what it's like. Finally, I'll be heading up to Mauna Kea tonight with some other UAC members since it's the second Saturday of the month. It's close to new moon, so if the weather will cooperate, I hope to get some great astrophotos that maybe I'll have time to process and show you all some day. #### Post a Comment Think I said something interesting or insightful? Let me know what you thought! Or even just drop in and say "hi" once in a while - I always enjoy reading comments.
2017-12-12 00:26:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.562411367893219, "perplexity": 1092.2049140160225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948514238.17/warc/CC-MAIN-20171212002021-20171212022021-00608.warc.gz"}
https://reviews.llvm.org/p/poelmanc/
# Projects

User does not belong to any projects.

# User Details

User Since: Sep 11 2019, 11:23 AM (146 w, 6 d)

# Feb 4 2022

Sorry, I somehow missed the acceptance back on 10 Jan. I lack commit access, so it would be great if you could push for me, thanks! (Feb 4 2022, 10:55 PM)

# Mar 1 2021

poelmanc added a comment to D97620: Fix DecisionForestBenchmark.cpp compile errors: Fixed comment as requested (keys() -> size()). I don't have commit access, so if you can push it that would be great, thanks! (Mar 1 2021, 10:22 AM)

poelmanc updated the diff for D97620: Fix DecisionForestBenchmark.cpp compile errors. Fix comment. (Mar 1 2021, 10:21 AM)

# Feb 28 2021

Removed a function in modernize/LoopConvertCheck.cpp that's no longer needed due to this patch. Fixed clang-format and clang-tidy issues identified by Harbormaster in the prior submission. (Feb 28 2021, 2:06 AM)

# Feb 27 2021

poelmanc updated the diff for D97620: Fix DecisionForestBenchmark.cpp compile errors. (Feb 27 2021, 5:02 PM)

poelmanc requested review of D97620: Fix DecisionForestBenchmark.cpp compile errors. (Feb 27 2021, 4:55 PM)

# Feb 26 2021

Does this need to be an option? It's easy to add one, but there are already two main include-related options, so before adding a third I wanted to give it some thought. As someone new to IncludeCategories, I was genuinely impressed at how easy it was to use and how it gave me such complete control over the grouping via regular expressions. In comparison, the determination of main headers was less clear and more hard-coded; I had to examine the source code to figure out that the comparison is case-insensitive, that it doesn't consider <> includes, that only file stems are considered (e.g. the foo/bar in foo/bar/baz.h is ignored), and that the behaviors of IncludeIsMainSourceRegex and IncludeIsMainRegex were a bit murky. (Feb 26 2021, 12:46 PM)

# Feb 16 2021

Thanks for the speedy reply and the great tool. With appropriate Regex and Priority settings, IncludeCategories worked flawlessly for me aside from not identifying the main header. Treating #include "foo/bar/baz.h" as the main header in file bar/random/baz.h seems like a bug, but I certainly see the dangers of changing the current <> behavior. I also considered treating <> includes as main headers only if they also contain a forward slash, e.g.:

```
if (!IncludeName.startswith("\"") && !IncludeName.contains("/"))
  return false;
```

That would resolve the <string.h> case, although #include <sys/types.h> in a file anything/sys/types.h would be identified as the main header. So making an option seems like the cleanest solution. Say, bool IncludeIsMainAllowBraces? (Feb 16 2021, 1:53 AM)

# Feb 8 2021

Capitalize IsOrHasDescendant, add } // namespace std per Harbormaster feedback. (Feb 8 2021, 11:55 PM)

It looks like premerge tests skipped your last diff with id 321451 (https://reviews.llvm.org/harbormaster/?after=87950). You can re-trigger by uploading a new diff; in the meantime I would also file a bug at https://github.com/google/llvm-premerge-checks/issues mentioning your diff id.

Thanks @kadircet, actually just reviewing the current bug list explained the problem: https://github.com/google/llvm-premerge-checks/issues/263, "Build is not triggered if diff repository is not set." I wasn't selecting "rG LLVM Github Monorepo" per the instructions at https://llvm.org/docs/Phabricator.html that said to leave the Repository field blank. (Feb 8 2021, 9:04 PM)

Call the llvm::Annotation constructor rather than operator= to fix a Linux build issue; fix some issues identified by clang-format and clang-tidy. (Feb 8 2021, 6:44 PM)

Thanks for your patience, this one should work, as I used my normal git show HEAD -U999999 workflow. (Feb 8 2021, 5:49 PM)

LGTM. Same as last time for the commit? (Feb 8 2021, 8:32 AM)

Comment change, "beginning" to "start" for consistency, being sure to set Repository on the "diff" page (not just on the edit page) to see if https://github.com/google/llvm-premerge-checks/issues/263 was the problem. (Feb 8 2021, 7:46 AM)

# Feb 7 2021

Update one comment, hopefully triggering Harbormaster to rerun. (Feb 7 2021, 10:51 PM)

"Should this be committed using poelmanc <cpllvm@stellarscience.com>?" That or Conrad Poelman <cpllvm@stellarscience.com> would be great, thanks.

"I committed using the email provided but it's not attributed to you on GitHub. It's attributed to you here, though. Is that email not linked to your GitHub account?" Thanks for the commit, and thanks for the heads-up! I've now added the address to my GitHub account. (Feb 7 2021, 12:05 PM)

# Feb 6 2021

On 2 Feb Harbormaster found a bug from my switching some char* code to use StringRef. I uploaded a new patch on 4 Feb, but Harbormaster does not appear to have rerun. What triggers Harbormaster? Do I need to take some action? I haven't been able to find such options myself. (Feb 6 2021, 6:30 PM)

Should this be committed using poelmanc <cpllvm@stellarscience.com>? That or Conrad Poelman <cpllvm@stellarscience.com> would be great, thanks. (Feb 6 2021, 6:04 PM)

# Feb 4 2021

Change the loop end condition in findLineEnd and add several assert statements. (Feb 4 2021, 8:36 AM)

# Feb 3 2021

"Can I ask if you could tidy the description of this? Basically remove all the stuff about hasGrandparent etc.; probably best to just remove everything after result = (a1 nullptr a2); in the description. It shows in the commit message and it's not strictly relevant." Thanks, done. I never thought about all that showing up in the commit message; I'll be more concise. (Feb 3 2021, 4:45 PM)

Thanks for all the great feedback I received here. To give credit where credit's due, this updated revision to UseNullptrCheck.cpp is now actually 100% @steveire's suggested code. Even one of the test cases was his. Whenever it's ready to land, I'd appreciate it if someone could push it, as I lack llvm-project commit access. (Feb 3 2021, 3:57 PM)

@njames93 Thanks for the review and for accepting this revision. I lack llvm-project commit access, so if it's good to go I would greatly appreciate it if you or someone could push this whenever you have a chance. Thanks! (Feb 3 2021, 3:45 PM)

# Feb 2 2021

Fix formatting, add suggested test case (which works). (Feb 2 2021, 12:34 AM)

# Feb 1 2021

Glad to be back after a year away from clang-tidy, and sorry to have let this patch linger. Running clang-tidy over a large codebase shows this patch is still needed. I believe I've addressed all identified issues but welcome feedback. (Feb 1 2021, 10:41 PM)

Add period to end of comment. (Feb 1 2021, 5:13 PM)

s/Guard/Lock/! I don't have commit access, so I'd appreciate a push. (Feb 1 2021, 2:03 PM)

Change Guard to Lock. (Feb 1 2021, 2:02 PM)

# Jan 31 2021

Thanks @steveire, that suggestion worked perfectly! I added the additional test case and shortened the mimicked strong_ordering code to a version from clang/unittests/ASTMatchers/ASTMatchersTraversalTest.cpp, but also manually tested this using both MSVC's and libstdc++v3's <compare> header. (Jan 31 2021, 9:52 PM)

"This does highlight an issue where the mimicked std library stubs used for tests don't correspond exactly to what the stdlib actually looks like, and can result in subtly different ASTs that have added/removed implicit nodes. Going a little off point here, but a few months back I pushed a fix for another check that passed its tests. However the bug report was re-opened, as the bug was still observable in the real world. It turned out the implementation of std::string used for the test had a trivial destructor, resulting in the AST not needing to emit CXXBindTemporaryExprs all over the place, which threw off the matching logic. Unfortunately this kind of disparity is hard to detect in tests, so it may be wise to test this locally using the compare header from a real standard library implementation (preferably all 3 main ones if you have the machines) and see if this behaviour is correct." (Jan 31 2021, 1:46 PM)

# Jan 30 2021

@njames93 Thank you so much for the quick feedback. I made your suggested changes and added a test that it properly converts result = (a1 > (ptr == 0 ? a1 : a2)); to result = (a1 > (ptr == nullptr ? a1 : a2)); now. (Jan 30 2021, 4:37 PM)

Thanks to the great feedback, changed unless(hasAncestor(cxxRewrittenBinaryOperator())) to unless(hasParent(expr(hasParent(cxxRewrittenBinaryOperator())))) and added a test to verify the improvement (and removed an extraneous comment). (Jan 30 2021, 4:27 PM)

# Jan 29 2021

Fix test failure in modernize-use-nullptr-cxx20.cpp by replacing #include <compare> with some minimal equivalent std code. (Jan 29 2021, 11:48 PM)

# Dec 22 2019

"ToolingTests/ApplyAtomicChangesTest.FormatsCorrectLineWhenHeaderIsRemoved also seems to be failing; you can run it with ninja ToolingTests && ./tools/clang/unittests/Tooling/ToolingTests --gtest_filter="ApplyAtomicChangesTest.FormatsCorrectLineWhenHeaderIsRemoved"" (Dec 22 2019, 9:00 PM)

Fix algorithm to fix the failing ToolingTests/ApplyAtomicChangesTest.FormatsCorrectLineWhenHeaderIsRemoved. That test revealed a very specific but real failure case: if existing Removals removed all whitespace including the line's ending newline, and the subsequent line contained only a newline, it would delete the subsequent newline too, which was not desired. Added a new test, CleanUpReplacementsTest.RemoveLineAndNewlineLineButNotNext, to explicitly test this case. (Dec 22 2019, 8:32 PM)

# Dec 21 2019

Update patch to rebase on latest: changed SourceLocation::contains to SourceLocation::fullyContains, and removed the new SourceLocation comparison operators since, coincidentally, @sammccall added them just days ago. (Dec 21 2019, 11:29 PM)

# Dec 20 2019

Address most of the feedback; I'll comment individually. (Dec 20 2019, 1:58 PM)

Any further feedback or thoughts on this? (Dec 20 2019, 1:58 PM)

# Nov 20 2019

"Also, could you rename the revision so that it reflects the fact that this is a change to clang-format and has nothing to do with clang-tidy?" Done, but feel free to suggest a better title. (Nov 20 2019, 11:35 PM)

I addressed the latest code review comments, added tests to clang/unittests/Format/CleanupTest.cpp, and updated numerous tests to reflect improved removal of blank lines. (Nov 20 2019, 11:25 PM)

# Nov 19 2019

poelmanc updated subscribers of D70144: clang-tidy: modernize-use-equals-default avoid adding redundant semicolons. Just a quick update: I made some progress addressing the architectural limitation of AnnotatedLines and evaluate(). I changed the AnnotatedLines constructor to also take a Line *PrevAnnotatedLine argument and set Line->First->Prev to the Last of the previous line and Line->Last->Next to the First of the next line. So now the Tokens remain connected across lines, and individual TokenAnalyzer subclasses can easily peek ahead of and behind the current changed Line if they wish. Places that previously iterated e.g. while(Tok) had to be changed to iterate while(Tok && Tok != Line->Last->Next); I probably haven't found all of those places yet. (Nov 19 2019, 3:18 PM)

# Nov 18 2019

Thanks @lebedev.ri for taking the time to think this through and reply. All that makes sense, so I've changed the default to 0. (Nov 18 2019, 3:22 PM)

Switch default to 0. Add a Release Note with some detail to increase the chances of someone finding this with an Internet search on the error message. (Nov 18 2019, 3:16 PM)

# Nov 16 2019

"Exactly due to the issue you are fixing here, we ended up disabling the complete check because we didn't want to live with the warnings it produced on -Wextra. Therefore, I'm actually strongly in favor of enabling the option by default. When others see that clang-tidy fixits introduce warnings (with -Wextra) or even break their build (with -Werror), they might not look into check options, but just disable the check directly." Just pinging to see if anyone has any thoughts on moving forward with this. Thanks in advance for any feedback! (Nov 16 2019, 12:35 AM)

# Nov 15 2019

If there's a way to match only CXXRecordDecls that are immediately followed by a TypedefDecl... Alternatively, within check(), when we get the TypedefDecl, is there any way to navigate up the AST to find its immediately preceding sibling node and check whether it's a CXXRecordDecl? If so, we could eliminate Finder->addMatcher(cxxRecordDecl(unless(isImplicit())).bind("struct"), this); altogether. I didn't see a way to do that, though. (Nov 15 2019, 9:11 PM)

# Nov 14 2019

(In D70144#1745583, @JonasToth wrote, quoting D70144#1745532 by @malcolm.parsons.) My initial attempt did not go well. I thought perhaps adding cleanupLeft(Line->First, tok::semi, tok::semi); to clang/lib/Format/Format.cpp:1491 might do the trick. Stepping through that in the debugger shows that cleanupPair iterates over tokens on affected lines over the affected range. But after the newly added default token and subsequent semi token comes a nullptr; I could not see how to peek past the default; to see what else is on the line. (Nov 14 2019, 9:08 PM)

"I'm a bit worried that this manual parsing technique will need fixing again in the future, but I think this is at least a reasonable incremental improvement." Thanks, and I agree. Your comments encouraged me to take another stab at improving things. See D70270, a whole new approach that removes manual parsing in favor of AST processing and successfully converts many more cases from typedef to using. (Nov 14 2019, 1:29 PM)

"While I have no objections against this patch, I wonder whether someone had a chance to ask GCC developers about this? Is it a conscious choice to suggest override when final is present? What's the argument for doing so?" Thanks; someone should ask them, as I believe this issue extends beyond clang-tidy: code with functions marked final cannot satisfy both gcc -Wsuggest-override and clang -Winconsistent-missing-override; gcc demands override final and clang demands just final. (Nov 14 2019, 1:02 PM)

"Oh yes. Totally forgot that. That would probably be the most elegant way :)" Interesting, so is the advantage of that approach that any fixit Replacement or Insertion that ends with a semicolon would have it removed if a semicolon already immediately follows it? That makes sense: one less thing for individual check developers to worry about. (Nov 14 2019, 9:02 AM)

# Nov 13 2019

"LGTM! Did you check on a real code-base that suffers from the issue, whether it works as expected?" Thanks! I have now run it on our real code base and it worked as expected. (Nov 13 2019, 4:50 PM)

Change to add just one helper function, findNextTokenSkippingComments, to LexerUtils.h, requiring no change to Token::isOneOf, and properly call it from UseEqualsDefaultCheck.cpp. (Nov 13 2019, 4:40 PM)

Update to add and use new findNextTokenSkippingComments and findNextTokenSkippingKind utility functions. Since the former calls the latter with just one token type, this required generalizing Token::isOneOf() to work with 1-to-N token kinds versus requiring 2-to-N. (Nov 13 2019, 12:47 PM)

# Nov 12 2019

poelmanc updated the summary of D70165: clang-tidy: modernize-use-override new option AllowOverrideAndFinal. (Nov 12 2019, 11:29 PM)

# Nov 11 2019

Move isWhitespace and skipNewlines to clang/Basic/CharInfo.h (renaming them to isWhitespaceStringRef and skipNewlinesChars to resolve name clashes) and add double quotes around "\n" and "\r" in comments. (Nov 11 2019, 6:45 PM)

Make requested fixes to documentation. (Nov 11 2019, 2:01 PM)

"Thanks! Do you have commit access, or do you need me to commit for you?" (Nov 11 2019, 1:52 PM)

Done, thanks. (Nov 11 2019, 1:52 PM)

poelmanc updated the diff for D67460: clang-tidy: modernize-use-using work with multi-argument templates. Changed post-increment/decrement to pre-increment/decrement. (Nov 11 2019, 1:44 PM)

# Nov 8 2019

I just rebased this patch on the latest master. I believe I've addressed all the comments raised so far. Should I add mention of this change to the release notes? (Nov 8 2019, 9:57 PM)

In D69238#1739627, @NoQ wrote: ... Wow. That makes this so much easier... Thank you so much! (Nov 8 2019, 9:50 PM)

poelmanc updated the diff for D69238: Fix clang-tidy readability-redundant-string-init for c++17/c++2a. (Nov 8 2019, 9:39 PM)

"I like that this check warns about copy constructors that don't copy. The warning can be suppressed with a NOLINT comment if not copying is intentional." Thanks for the comment! It's not clear to me how you suggest proceeding, though. (Nov 8 2019, 5:54 PM)

If it is indeed the extra AST node for the elidable constructor, see if you can use the ignoringElidableConstructorCall AST matcher to ignore it, thereby smoothing over AST differences between language modes. (Nov 8 2019, 5:17 PM)

Now allows namespaces on types and defaults to ::std::basic_string as requested. The code uses namespaced string type names to check types, and uses non-namespaced string type names to check for the required one-argument or two-argument-defaulted constructors. (Nov 8 2019, 4:41 PM)

In D69238#1736365, @NoQ wrote: "I suspect that it's not just the source range, but the whole AST for the initializer is different, due to C++17 mandatory copy elision in the equals-initializer syntax. Like, before C++17 it was a temporary constructor, a temporary materialization (ironic!), and a copy constructor, but in C++17 and after it's a single direct constructor which looks exactly like the old temporary constructor (except not being a CXXTemporaryObjectExpr). You're most likely talking about different construct-expressions in different language modes. That said, it should probably be possible to change the source range anyway somehow."

Thanks to all the encouragement here, I spent a few more hours stepping through code and have found a one-line change to clang\lib\Sema\SemaInit.cpp:8053 that fixes this bug! Change SourceLocation Loc = CurInit.get()->getBeginLoc(); to SourceLocation Loc = Kind.getLocation(); For SK_UserConversion, CurInit is set at SemaInit.cpp:7899 to Args[0], i.e. the first argument to the constructor, which is "" in this case. By changing Loc to Kind.getLocation(), the BuildCXXConstructExpr at SemaInit.cpp:8064 gets created with a SourceRange spanning a = "". With just that change, the SourceRange for an expression like std::string a = "" becomes consistent across C++11/14/17/2a, and readability-redundant-string-init tests pass in all C++ modes (so we can throw away my 70 lines of manual parsing code). (Nov 8 2019, 2:15 PM)

Just documenting here that I sent the following email to cfe-dev@lists.llvm.org: (Nov 8 2019, 2:06 PM)

Thanks @aaron.ballman, I don't have commit access, so will someone else commit this? (Nov 8 2019, 10:17 AM)

# Oct 29 2019

@aaron.ballman Thanks for the hasAnyName feedback! From the name internal::VariadicFunction I assumed arguments were needed at compile time, but thanks to your suggestion I found the overload taking ArrayRef<ArgT>. (Oct 29 2019, 4:06 PM)

Added release notes, fixed backticks in documentation, removed a blank line, removed the new hasListedName matcher, and used the existing hasAnyName matcher. (Oct 29 2019, 3:56 PM)

Changed default to 1, updated the help accordingly, and removed an extraneous set of braces around a one-line statement. (Oct 29 2019, 12:30 AM)

# Oct 28 2019

Thanks for the feedback; the new patch removes the extra const, clang::, and braces on one-line if statements, and now explains the regex in readability-redundant-string-init.cpp. (Oct 28 2019, 10:14 AM)

poelmanc updated the diff for D69238: Fix clang-tidy readability-redundant-string-init for c++17/c++2a. Remove extra const, braces for one-line if statements, and clang::. Added a comment explaining the need for a regex on a warning test line. (Oct 28 2019, 10:11 AM)

# Oct 24 2019

What do @malcolm.parsons, @alexfh, @hokein, @aaron.ballman, @lebedev.ri think of @mgehre's suggestion to enable IgnoreBaseInCopyConstructors as the default setting, so gcc users won't experience build errors and think "clang-tidy broke my code!"? (Oct 24 2019, 11:55 AM)

# Oct 21 2019

Rebase to latest master (tests moved into new "checkers" directory). (Oct 21 2019, 9:01 AM)

Addressed these the other day but failed to check the "Done" boxes. Done! (Oct 21 2019, 9:01 AM)

Checked "Done". (I addressed @jonathanmeier's comment feedback with a previous update but forgot to check the box!) (Oct 21 2019, 8:51 AM)
http://kancolle.wikia.com/wiki/Expedition/Introduction
# Expedition/Introduction

For success conditions and yields, see Expedition/Reference tables. For common questions (be sure to read), see Expedition/Reference tables#FAQS.

Expeditions are the backbone of the naval economy. They provide a safe, steady income of virtually all common in-game resources, with the exception of improvement materials.

• The fuel, ammo, steel, and bauxite gained from expeditions can accumulate beyond the soft cap. This makes expeditions the best way to stockpile resources for events.
• Expeditions also provide experience, making them useful for passively leveling weak ships.
• A few expeditions are sortie support expeditions, which provide additional support fire for difficult sorties and events.
• Many expeditions are also quest requirements.

Expeditions themselves need to be unlocked first, usually by completing other expeditions or quests. Once they have been unlocked, they can be repeated indefinitely. The list of expeditions does not change from one update to another. However, support expeditions for special event maps might be temporarily included during events.

## Unlocking an expedition

Expeditions are unlocked by completing Quest A4. Normally, completing an expedition will unlock the next one in numerical order. However, there are exceptions:

Notice: There are still "exceptions" missing. If you find something that doesn't fit into the numerical ordering and isn't listed here, please comment below.

• Expeditions 1~5 unlock Expedition #9.
• Expeditions 9~13 unlock Expedition #14.
• Expedition 14 unlocks Expedition #15.
• Expeditions 16-17 unlock Expedition #18.
• Expedition 17 unlocks Expedition #21.
• Expedition 18 unlocks Expeditions #22, 24, 25, and 33.
• Expedition 20 unlocks Expedition #27.
• Expeditions 1~8 unlock Expedition #32.
• Expedition 26 unlocks Expeditions #34 and 35.
• Expedition 36 unlocks Expedition #39.
• Expedition 38 unlocks Expedition #40.

Additionally, some expeditions require clearing a certain map and completing multiple expeditions. They are:

• Clearing 1-5 and completing Expeditions #9-20 and 27-29 unlocks Expeditions 30 and 31 (for Quests D5 to D8).
• Clearing 1-5 and completing Expeditions #9-18, 25, 35, and 36 unlocks Expeditions 37, 38, and 39 (for Quests D9, 11, 13, 14).
• Clearing 1-5 and completing Expeditions #9-18 and 22 unlocks Expedition 23 (for Quests D10 and D12).

## Sending a fleet on expedition

For instructions on how to access the Sortie screen, see Tutorial: How to Play.

To send fleets on expedition:

1. Access the Sortie screen and click the rightmost, blue button.
2. Select an expedition.
3. Select a fleet to send on the expedition.
4. Click the bottom right button to send the selected fleet on the expedition.

Once unlocked, an expedition can be repeated indefinitely without any penalty to subsequent runs. However, an expedition can only be attempted once at any given moment. Multiple fleets can go on different expeditions, but not the same expedition. The first fleet (main fleet) cannot go on an expedition; only the second, third, and fourth fleets can.

Every expedition has its own success conditions. Failure to meet the conditions results in a failed expedition, in which case the fleet still consumes the fuel, ammunition, and time specified by the expedition, but no resources or reward are returned. HQ XP gain from failed expeditions is significantly reduced, to 30%; however, Ship XP gain remains unaffected.
While this might prove desirable for those who want to suppress their HQ level, it is generally not recommended due to the resource costs involved.

Check the top right corner of the home screen for the bubble indicating an expedition has returned; click on the home screen to obtain the rewards.

• Expedition quests only count when the expedition fleet returns; therefore, one only needs to activate the quest when the fleet returns.
• An expedition can be completed early by entering the main screen when there is less than one minute remaining.
• Ships lose 3 morale per expedition.

Ship damage does not affect the outcome of the expedition. However, having a heavily damaged ship as the flagship will prevent the fleet from going on an expedition. You can send a fleet on an expedition with a moderately damaged ship as flagship, or a heavily damaged ship as a non-flagship, and still obtain a success. Having a ship under repair in the fleet will prevent that fleet from going on expedition.

### Recalling an expedition

In the event that the wrong fleet was accidentally sent on an expedition, an "Expedition Recall" can be used to cancel the expedition and save the time that would otherwise be wasted. An expedition canceled in this manner yields no resources, rewards, or experience. It is also counted as a failure on the Admiral's profile. Supply and morale required by the expedition are still consumed. This should only be used with discretion, especially since a recall cannot itself be cancelled.

The recalled fleet does not return immediately, as it takes a while to return to base. The return time is the smaller of the remaining time and the elapsed time, divided by 3 (see the short code sketch after the list below). For example, a fleet is sent on expedition #1 (15 minutes) and recalled after 6 minutes:

1. The remaining time is 9 minutes.
2. The elapsed time is 6 minutes.
3. The smaller of the two, the 6 minutes of elapsed time in this case, is divided by 3.
4. The final recall time is 2 minutes.

## Success conditions

### Ratings and rewards

Expeditions will always yield the primary resources upon success, and they will always grant experience unless recalled. However, secondary item rewards are not guaranteed upon success.

• For expeditions that give only one type of item, there is a 50% chance of getting that item.
• For expeditions that give two types of items, the item listed on the left has a 50% chance. The item on the right is only awarded upon Great Success (100% chance).
• Some expeditions have a chance of giving more than 1 of the same item; such expeditions are given a value other than x1 (e.g., x2). Getting more than 1 is not guaranteed; the number listed is merely the maximum number of items you can possibly get. The chances, however, are equally split between each possible outcome.
• For example, expedition 13 has x2 buckets as a left item. This means you have a 33% chance of gaining 2 buckets, a 33% chance of gaining 1 bucket, and a 33% chance of obtaining no buckets.
• Expedition 40 has x3 small furniture coin boxes displayed as a left item. This means that you have a 25% chance of gaining 3 boxes, a 25% chance of gaining 2 boxes, a 25% chance of gaining only 1 box, and a 25% chance of getting no boxes at all.
• Expedition 9 has x2 buckets displayed as a right item. The rule of requiring Great Success to get right items still applies. As it is impossible to not get the right item if Great Success is achieved, the chances are equally split 50/50 between getting 1 or 2 buckets.
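The recall rule and the equal-split item chances above are simple enough to express directly. Here is a minimal Python sketch of both, assuming the wiki's description is accurate; the function names and the minutes-based units are my own choices, not anything from the game's code:

```python
# Recall time: the smaller of the remaining time and the elapsed time,
# divided by 3 (per the description above; all times in minutes).
def recall_time(total_minutes, elapsed_minutes):
    remaining = total_minutes - elapsed_minutes
    return min(remaining, elapsed_minutes) / 3

# Left-item count chances: for an item listed as "x k", the outcomes
# 0, 1, ..., k are stated to be equally likely, i.e. 1/(k+1) each.
# (A right item on a Great Success skips the 0 outcome and splits
# evenly over 1..k instead.)
def left_item_chances(max_count):
    p = 1 / (max_count + 1)
    return {n: p for n in range(max_count + 1)}

print(recall_time(15, 6))     # expedition #1 recalled after 6 min -> 2.0
print(left_item_chances(2))   # x2 buckets -> {0: 0.33.., 1: 0.33.., 2: 0.33..}
```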
The outcome of an expedition depends on several conditions. There exist four possible outcomes, each of which has different resource, experience, and item reward yield ratios.

| Rating | Resources | HQ XP[2] | Flagship XP[3] | Ship XP[3] | Left reward | Right reward |
|---|---|---|---|---|---|---|
| Recall | 0% | 0% | 0% | 0% | 0% | 0% |
| Failure | 0% | 30% | 150% | 100% | 0% | 0% |
| Normal Success | 100% | 100% | 150% | 100% | 50% | 0% |
| Great Success | 150% | 200% | 300% | 200% | 50% | 100% |

1. (Applies to the experience columns.) There is a 50% chance of experience doubling for ships, independent of the result (Failure, Normal, or Great Success); each value in the table should be multiplied by 1.5 for an average value.
2. Relative to HQ experience for the expedition.
3. Relative to ship experience for the expedition.

#### Extra bonuses to expedition incomes

The formula is unfinished and still has counterexamples (all involving at least 3 Toku Daihatsus). Maximal bonuses weren't tested either (that requires multiple maxed Toku Daihatsus). Other than Great Success, which provides a 50% bonus to expedition incomes, the following factors can provide further bonuses.

In more detail, final resources (fuel, ammo, steel, bauxite) are given by

$\lfloor R \cdot S \rfloor + \lfloor R \cdot S \cdot (B_{1} + B_{\star}) \rfloor + \lfloor R \cdot S \cdot (B_{2} + \text{?}) \rfloor \quad \left( = \sum_{i=0,1,2} \lfloor R \cdot S \cdot (B_{i} + B_{i}^{\star}) \rfloor, \text{ with } B_0 = B_0^{\star} = 0 \right)$

• For base resources (R), see the expedition reference tables.
• S is the Great Success bonus: 1.5 for Great Success, 1 for Normal Success.
• B1, B2, and B⋆ represent three extra bonuses:
  • $B_{1} = \min(0.2, \sum b_1)$: Kinu Kai Ni and landing craft bonuses (b1), summed and capped at 20% (e.g., 5% per Daihatsu Landing Craft, 5% for Kinu Kai Ni, etc., up to 20%; see the table below for b1 values).
  • $B_{\star} := B_{1}^{\star} = B_{1} \cdot \bar{\bigstar} / 100$: landing craft improvement bonuses, depending on B1 and the average number of stars over all equipped landing craft ($\bar{\bigstar}$); thus capped at $0.2 \cdot 0.01 \cdot 10 = 2\%$.
  • $B_{2} = \min(cap, \sum b_2)$: additional landing craft bonuses (b2), summed and capped at a variable cap. Currently this only applies to the Toku Daihatsu Landing Craft, so it can be referred to as the Toku Daihatsu Landing Craft additional bonus (2% per Toku Daihatsu Landing Craft, up to the cap).
  • Presumably, cap = 0.05 (5%) for 3 Toku Daihatsu and 0.054 (5.4%) for 4 or more.
  • The cap may interact with improvements (the $B_{2}^{\star}$ term; requires verification).

Per-equipment bonus values ("GS" is the value multiplied by the Great Success factor of 1.5; the per-★ and +10★ columns follow from the $B_{\star}$ formula above):

| Equipment | b1 | b1 (GS) | b1 per ★ | per ★ (GS) | b1 + 10★ | b1 + 10★ (GS) | b2 | b2 (GS) |
|---|---|---|---|---|---|---|---|---|
| Toku Daihatsu (TDLC) | 5% | 7.5% | 0.05% | 0.075% | 5.5% | 8.25% | 2% | 3% |
| Daihatsu (DLC) | 5% | 7.5% | 0.05% | 0.075% | 5.5% | 8.25% | 0% | 0% |
| Kinu Kai Ni | 5% | 7.5% | (no ★ for Kinu!) | - | - | - | 0% | 0% |
| DLC (Tank) | 2% | 3% | 0.02% | 0.03% | 2.2% | 3.3% | 0% | 0% |
| KaMi | 1% | 1.5% | 0.01% | 0.015% | 1.1% | 1.65% | 0% | 0% |

The corresponding caps are B1 ≤ 20% (GS 30%), B2 ≤ 5%-5.4% (GS 7.5%-8.1%), and B⋆ ≤ 2% (GS 3%), giving combined maxima of B1+B⋆ = 22% (GS 33%), B1+B2 = 25%-25.4% (GS 37.5%-38.1%), and B1+B2+B⋆ = 27%-27.4% (GS 40.5%-41.1%). Figures given in brackets in the original table (e.g. 120%) include the base term. Due to flooring, the formula can't be expressed as $\lfloor \sum R \cdot S \cdot (B_{i} + B_{i}^{\star}) \rfloor$, so the combined values above are not always exact.

### Normal Success

Meeting all of the following conditions guarantees a Normal Success:

1. Fleet's supply: Fleets sent on expedition must be fully resupplied. Missing even a bar of supply results in a failure.
2. Fleet's morale: Fleets sent on expedition must not reach 39 morale or lower upon returning.
3. Flagship's level: The level of the flagship (the first ship of the fleet) must be greater than or equal to the flagship level specified by the expedition.
4. Total level of fleet: The sum of the levels of every ship in the fleet, including the flagship, must be greater than or equal to the total fleet level specified by the expedition.
5. Ship types: For a list of ship type abbreviations, see Glossary#Class classification. Most expeditions require a certain number of ships of specific types. The ships in a fleet can be in any order, but the required number of ships of each type must be met. Some ship types can be used as substitutes for other ship types; some remodeled ship types and their old models are not interchangeable, while other types follow a one-way relationship.
• Light carriers [CVL] and Seaplane tenders [AV] can be used as substitutes for Fleet carriers [CV].
• Aviation submarines [SSV] can be used as substitutes for Submarines [SS].
• Aviation cruisers [CAV] and Heavy cruisers [CA] are not interchangeable.
• Torpedo cruisers [CLT] and Light cruisers [CL] are not interchangeable.
• Battleships [BB] and Fast battleships [FBB] cannot be used as substitutes for Aviation battleships [BBV].
• Light carriers [CVL] and Fleet carriers [CV] cannot be used as substitutes for Seaplane tenders [AV].
6. Number of ships in fleet: Fleets sent on an expedition must contain at least a certain number of ships. After each of the ship type requirements is met, wildcard ships (denoted as type XX in the expedition tables) can be added to meet the minimum-number-of-ships condition. Extra ships beyond the minimum needed to clear the other conditions are also allowed.
7. Number of drum canisters and drum carriers: Expeditions 21, 37, and 38 also require Drum Canisters to be equipped. The total number of drums and the number of ships carrying drums must each be greater than or equal to the given minimum value.

### Great Success

For more information and tips on how to sparkle ships, see Morale/Fatigue.

Great Success is the advanced form of Success. It grants 50% additional resources, double (x2) the Admiral experience, double (x2) the ship experience, and, if applicable, the Great Success item reward. The Great Success chance is determined by the initial state of the fleet: even if none of your ships are sparkling when the expedition returns, you can still get a Great Success rating as long as they were sparkling when they left. Great Success still requires the Normal Success conditions to be met. The factors that affect its chances of occurring also differ depending on the expedition type:

#### Regular Expedition

• The primary requirement for Great Success is that all ships in the expedition fleet must be sparkled (morale ≥ 50). If you have 6 ships in the expedition fleet and you decide to sparkle only 5 of the 6, you will not be able to achieve Great Success.
• The Great Success rate starts at 21% if all the requirements are met, and is increased by 15% for every sparkled ship.
• 5 sparkled ships are enough for a 96% Great Success rate. If the expedition requires 6 ships, you have to sparkle the 6th ship as well.
• With 6 sparkled ships, you achieve a Great Success rate of 100%.

Regular Expeditions:

| # of sparkled ships | GS Rate |
|---|---|
| 1 | 36% |
| 2 | 51% |
| 3 | 66% |
| 4 | 81% |
| 5 | 96% |
| 6 | 100% |

#### Drum Expedition (21, 24, 37, 38, 40)

• Unlike Regular expeditions, sparkling every ship in the fleet is not necessary to gain Great Success.
These expeditions involve the use of Drum Canisters, and some of them require you to carry a certain number as a success condition.

• Like Regular expeditions, the Great Success rate starts at 21% if all requirements are met, but it is further influenced by whether enough drums are equipped and by how many sparkled ships are in your fleet (both rules are reproduced in the code sketch after the table below).
• The Great Success rate is increased by:
• Overdrumming, which means the fleet carries more drums than required. This is a one-time bonus of 20% per expedition fleet; equipping more drums than the required overdrum value does not increase the rate any further. Note: if you do not overdrum, you suffer a penalty of -15% instead. It is thus recommended to always try to overdrum whenever possible.
• Overdrumming means carrying 25% more drums than the expedition requires, rounded up. Expedition 21, for example, requires you to bring 3 drums, and 125% of 3 is 3.75, so for expedition 21 you need 4 drums in total.
• In the case of expeditions 24 and 40, where equipping drums is not a requirement for basic success, the requirement for overdrumming is equipping at least 4 drums.
• Sparkling your ships. The Great Success rate increases per sparkled ship, just as in Regular Expeditions.
• Sparkling at least 4 ships combined with overdrumming will give you a 100% Great Success rate on these expeditions. Note that it is impossible to achieve 100% if you do not overdrum, even if you bring 6 sparkled ships.

Drum Expeditions:

| # of sparkled ships | Not overdrummed | Overdrummed |
|---|---|---|
| 0 | 6% | 41% |
| 1 | 21% | 56% |
| 2 | 36% | 71% |
| 3 | 51% | 86% |
| 4 | 66% | 100% |
| 5 | 81% | 100% |
| 6 | 96% | 100% |
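Since both Great Success tables above are affine formulas with caps, and the income bonus is a sum of floored terms, here is a short Python sketch that reproduces them. All names are mine; the rates follow the wiki's stated rules (21% base, +15% per sparkled ship, +20% for overdrumming or -15% otherwise), and the income function implements only the verified part of the explicitly unfinished bonus formula:

```python
import math

def gs_rate_regular(sparkled_ships):
    # 21% base when all Normal Success requirements are met,
    # plus 15% per sparkled ship, capped at 100%.
    return min(1.0, 0.21 + 0.15 * sparkled_ships)

def gs_rate_drum(sparkled_ships, overdrummed):
    # Drum expeditions: +20% one-time overdrumming bonus,
    # -15% penalty otherwise, +15% per sparkled ship.
    base = 0.21 + (0.20 if overdrummed else -0.15)
    return max(0.0, min(1.0, base + 0.15 * sparkled_ships))

def expedition_income(base_resource, great_success, b1_sum, avg_stars, b2_sum):
    # floor(R*S) + floor(R*S*(B1 + B1*)) + floor(R*S*B2), with B1 capped
    # at 20%, B1* = B1 * (average stars)/100, and B2 capped at 5%; the
    # B2-improvement interaction is left out, as the wiki marks it unverified.
    s = 1.5 if great_success else 1.0
    b1 = min(0.20, b1_sum)
    b1_star = b1 * avg_stars / 100
    b2 = min(0.05, b2_sum)
    return (math.floor(base_resource * s)
            + math.floor(base_resource * s * (b1 + b1_star))
            + math.floor(base_resource * s * b2))

assert gs_rate_drum(4, overdrummed=True) == 1.0        # table: 4 sparkled -> 100%
assert round(gs_rate_drum(6, overdrummed=False), 2) == 0.96
assert round(gs_rate_regular(1), 2) == 0.36
```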
http://mathhelpforum.com/number-theory/94307-field-question-help-please.html
# Math Help - field question help please

1. ## field question help please

$R(x) = \{P(x)/Q(x) : P \text{ and } Q \text{ are polynomials}\}$. Can you explain how I can show this is a field? Please explain it step by step. Thanks for your help.

2. Your statement is very vague. These are polynomials over what? An integral domain? An arbitrary ring? A field? The real numbers? It's very important that you specify. Do you know the axioms for a field? If so, and you know what your polynomials are, you just have to verify each of the axioms: show that it is an abelian group under addition, that the nonzero elements form an abelian group under multiplication, that multiplication is distributive over addition, etc.

3. These are polynomials as in the set $\{\sum_{i=0}^{n} a_i x^i \mid a_i \in \mathbb{R}\}$. I am asking how to show this is a field; you are asking whether it is an integral domain. Don't I have to show that it is an integral domain? My book gives a theorem that says every field is an integral domain. I think I got confused when translating into English.

4. Ok, so they are polynomials in $\mathbb{R}[x]$. The proof that the rational functions form a field will depend on the properties of the integral domain $\mathbb{R}[x]$. Take a look at the various defining properties of a field here. Verify them one by one for the set of rational functions (quotients of polynomials in $\mathbb{R}[x]$) and your problem is solved; none of them is hard to establish. If you have trouble with one of them, feel free to ask again.
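To make reply #4 concrete, here is a sketch in LaTeX of the two verifications that actually use the integral-domain property of $\mathbb{R}[x]$ (namely, that $q_1 q_2 \neq 0$ whenever $q_1, q_2 \neq 0$); the remaining field axioms follow by the same kind of direct computation:

```latex
% Closure under addition and multiplication: the denominators stay
% nonzero because R[x] has no zero divisors.
\frac{p_1}{q_1} + \frac{p_2}{q_2} = \frac{p_1 q_2 + p_2 q_1}{q_1 q_2},
\qquad
\frac{p_1}{q_1} \cdot \frac{p_2}{q_2} = \frac{p_1 p_2}{q_1 q_2}.

% Multiplicative inverses: if p/q \neq 0 then p \neq 0, so q/p is
% again a rational function, and
\frac{p}{q} \cdot \frac{q}{p} = \frac{p q}{q p} = 1.
```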
http://sporadic.stanford.edu/bump/group/gr2_2.html
# Characters of Abelian Groups

Now let $G$ be a finite abelian group, which we will write multiplicatively. Let $L^2 (G)$ be the inner product space of all complex-valued functions on $G$, with the inner product $\left\langle \phi, \psi \right\rangle = \frac{1}{|G|} \sum_{g \in G} \phi (g) \overline{\psi (g)} .$ It is a finite-dimensional Hilbert space. By a linear character $\chi$ of $G$, we mean a homomorphism $\chi : G \longrightarrow \mathbb{C}^{\times}$. The linear characters form an abelian group under multiplication, which we will denote $G^{\ast}$. If $n$ is a positive integer, we will denote by $\mu_n$ the group of $n$-th roots of unity in $\mathbb{C}$. Lemma 2.2.1: If $\chi$ is a linear character of the finite group $G$, then $| \chi (g) | = 1$ for all $g \in G$. In fact, $\chi (G) \subset \mu_n$, the group of $n$-th roots of unity in $\mathbb{C}$, where $n$ is the exponent of $G$. Proof. It is trivial that if $g \in G$ then $g^n = 1$, and so $\chi (g)^n = 1$, and so $\chi (g) \in \mu_n$. Of course this implies $| \chi (g) | = 1$ in $\mathbb{C}$. Lemma 2.2.2: If $\chi, \theta \in G^{\ast}$, then $\left\langle \chi, \theta \right\rangle = \left\{ \begin{array}{ll} 1 & \text{if } \chi = \theta,\\ 0 & \text{otherwise} . \end{array} \right.$ Proof. If $\chi \neq \theta$ then $\chi (x) \neq \theta (x)$ for some $x \in G$. Since $| \theta (g) | = 1$, $\overline{\theta (g)} = \theta (g)^{- 1}$ for all $g \in G$. Now $\left\langle \chi, \theta \right\rangle = \frac{1}{|G|} \sum_{g \in G} \chi (g) \overline{\theta (g)} = \frac{1}{|G|} \sum_{g \in G} \chi (g) \theta (g)^{- 1} .$ Now we permute the elements of $G$ by making the substitution $g \longmapsto g x$ and obtain $\left\langle \chi, \theta \right\rangle = \frac{1}{|G|} \sum_{g \in G} \chi (g x) \theta (g x)^{- 1} = \chi (x) \theta (x)^{- 1} \frac{1}{|G|} \sum_{g \in G} \chi (g) \theta (g)^{- 1} = \chi (x) \theta (x)^{- 1} \left\langle \chi, \theta \right\rangle .$ Since $\chi (x) \neq \theta (x)$, this implies that $\left\langle \chi, \theta \right\rangle = 0$. On the other hand, if $\chi = \theta$ then $\chi (g) = \theta (g)$ for all $g$, so $\left\langle \chi, \theta \right\rangle = \frac{1}{|G|} \sum_{g \in G} \chi (g) \theta (g)^{- 1} = \frac{1}{|G|} \sum_{g \in G} 1 = 1.$ We see that the linear characters of $G$ are orthonormal. We will eventually show that they are an orthonormal basis of $L^2 (G)$, but we need some further preparations before we can show this. Lemma 2.2.3: If $G$ is a finite abelian group and $H$ a proper subgroup, and if $\chi$ is a linear character of $H$, then $\chi$ can be extended to a subgroup of $G$ that is larger than $H$. To say that $\chi$ can be extended to a subgroup $K$ of $G$ that contains $H$ means that we can find a linear character $\tilde{\chi}$ of $K$ such that $\tilde{\chi} (h) = \chi (h)$ when $h \in H$. Proof. Let $x \in G - H$. There is a smallest positive integer $d$ such that $x^d \in H$. Then $x^n \in H$ if and only if $n$ is a multiple of $d$. Find a complex number $a$ such that $a^d = \chi (x^d)$. Let $K = \left\langle H, x \right\rangle$. This is the subgroup of elements of $G$ that can be written in the form $x^n h$ for some $h \in H$. We claim that we can define a character $\tilde{\chi}$ of $K$ by $\tilde{\chi} (x^n h) = a^n \chi (h)$. We must check that this is well defined. If $x^n h = x^m h'$, then $h' h^{- 1} = x^{n - m}$, so $n - m$ is a multiple of $d$, say $n - m = d k$.
Then $\chi (h') \chi (h)^{- 1} = \chi (x^{n - m}) = \chi (x^d)^k = a^{d k} = a^{n - m},$ which implies that $a^n \chi (h) = a^m \chi (h')$. Thus $\tilde{\chi}$ is well-defined. It is easily seen to be a homomorphism, that is, a linear character. Proposition 2.2.1: Let $G$ be a finite abelian group, and let $H$ be a subgroup of $G$. Let $\chi$ be a linear character of $H$. Then $\chi$ can be extended to a linear character of $G$. Proof. (Click to Expand/Collapse) Let $\Sigma$ be the set of subgroups $K$ of $G$ such that $K \supseteq H$ and $\chi$ can be extended to $K$. The set $\Sigma$ is nonempty since $H \in \Sigma$, so let $K$ be a maximal element. If $K$ is a proper subgroup of $G$, then an extension of $\chi$ to $K$ exists but cannot be extended to any larger subgroup, which contradicts Lemma 2.2.3. Thus $K = G$. Proposition 2.2.2: Let $G$ be a finite abelian group, and let $x, y \in G$. If $\chi (x) = \chi (y)$ for all $\chi \in G^{\ast}$, then $x = y$. Proof. (Click to Expand/Collapse) Let $z = x y^{- 1}$ have order $n$. We can define a linear character of $\left\langle z \right\rangle$ by $\chi (z^k) = e^{2 \pi i k / n}$. If $z \neq 1$ then $\chi (z) \neq 1$, and extending $\chi$ to a character of $G$ by Proposition 2.2.1 gives a contradiction. So $z = 1$ and $x = y$. Let $G$ be a finite abelian group, and let $x \in G$. Then $x$ determines a function $\check{x}$ on $G^{\ast}$, namely the map $\check{x} (\chi) = \chi (x)$. The fact that $x$ is determined by $\check{x}$ is a consequence of Proposition 2.2.2. Proposition 2.2.3: Let $G$ be a finite abelian group. (i) We have $|G| = |G^{\ast} |$. (ii) If $x \in G$, then $\check{x} \in (G^{\ast})^{\ast}$, and the map $x \longmapsto \check{x}$ is an isomorphism $G \longrightarrow (G^{\ast})^{\ast}$. Proof. (Click to Expand/Collapse) We have $\check{x} (\chi \chi') = \chi \chi' (x) = \chi (x) \chi' (x) = \check{x} (\chi) \check{x} (\chi')$, so $\check{x}$ is a character of $G^{\ast}$. To see that $x \longmapsto \check{x}$ is a homomorphism, observe that $\check{x} \check{y} (\chi) = \check{x} (\chi) \check{y} (\chi) = \chi (x) \chi (y) = \chi (x y) = \check{(x y)} (\chi),$ because $\chi$ is a character. We see that $x \longmapsto \check{x}$ is a homomorphism $G \longrightarrow (G^{\ast})^{\ast}$. We can now prove (i) and (ii) simultaneously. We first observe that $|G^{\ast} | \le |G|$ since the linear characters are an orthonormal set, hence linearly independent. Applying this twice, $| (G^{\ast})^{\ast} | \le |G|$. But $x \longmapsto \check{x}$ is a homomorphism $G \longrightarrow (G^{\ast})^{\ast}$ that is injective by Proposition 2.2.2. We see that $|G| = | (G^{\ast})^{\ast} |$ and $x \longmapsto \check{x}$ is an isomorphism. Now $|G| = | (G^{\ast})^{\ast} | \le |G^{\ast} |$ and so $|G| = |G^{\ast} |$. Because $x \longmapsto \check{x}$ is an isomorphism, we may identify $x$ with $\check{x}$ and regard elements of $G$ as characters of $G^{\ast}$. This means that the roles of $G$ and $G^{\ast}$ are symmetrical. Theorem 2.2.1: Let $G$ be a finite abelian group. Then $G^{\ast}$ is an orthonormal basis of $L^2 (G)$. Proof. (Click to Expand/Collapse) We have already shown that $G^{\ast}$ is an orthonormal set, hence linearly independent. But $|G^{\ast} | = |G| = \dim (L^2 (G))$, and so they are a basis. Exercise 2.2.1: If $G$ and $H$ are finite abelian groups, prove that $(G \times H)^{\ast} \cong G^{\ast} \times H^{\ast} .$ Exercise 2.2.2: If $G$ is a finite abelian group, prove that $G \cong G^{\ast}$. 
(Hint: reduce to the case of a cyclic group.) Exercise 2.2.3: (Fourier inversion formula.) Let $\mathcal{F}: L^2 (G) \longrightarrow L^2 (G^{\ast})$ be the Fourier transform, defined by $\mathcal{F}f = \hat{f}$, where $\hat{f}$ is the function on $G^{\ast}$ defined by $\hat{f} (\chi) = \frac{1}{\sqrt{|G|}} \sum_{x \in G} \chi (x) f (x) .$ Prove that $f (x) = \frac{1}{\sqrt{|G|}} \sum_{\chi \in G^{\ast}} \overline{\chi (x)} \hat{f} (\chi) .$ Exercise 2.2.4: (Plancherel formula.) Prove that $\mathcal{F}$ is an isometry, that is, $\left\langle f_1, f_2 \right\rangle = \left\langle \widehat{f_1}, \widehat{f_2} \right\rangle$. Although Exercise 2.2.2 shows that $G \cong G^{\ast}$, this is a less natural isomorphism than the isomorphism $G \cong (G^{\ast})^{\ast}$. The isomorphism $G \longrightarrow (G^{\ast})^{\ast}$ was defined in a canonical way, but any description of the isomorphism $G \cong G^{\ast}$ will depend on arbitrary choices. For example, if you solve Exercise 2.2.2 by first decomposing $G$ as a direct product of cyclic groups, the proof will depend on the choice of this decomposition. The operation $^{\ast}$ is actually a functor, which means that it is not only an operation on abelian groups, but also on their homomorphisms. Indeed, if $f : G \longrightarrow H$ is a homomorphism of abelian groups, then there is induced a homomorphism $f^{\ast} : H^{\ast} \longrightarrow G^{\ast}$, which is composition with $f$. Thus if $\chi \in H^{\ast}$, then $\chi \circ f \in G^{\ast}$, and this is $f^{\ast} (\chi)$. Now the functor $\ast$ is contravariant since it reverses the direction of arrows. On the other hand, iterating it gives a covariant functor $\ast \ast$, since the direction of arrows is twice reversed: since $f^{\ast}$ is a map $H^{\ast} \longrightarrow G^{\ast}$, $(f^{\ast})^{\ast}$ is a map $(G^{\ast})^{\ast} \longrightarrow (H^{\ast})^{\ast}$. Now the naturality of the isomorphism $\check{\,} : G \longrightarrow (G^{\ast})^{\ast}$ can be expressed with the observation that the following diagram commutes: [Diagram omitted: the commutative square with vertical maps $G \to (G^{\ast})^{\ast}$ and $H \to (H^{\ast})^{\ast}$ and horizontal maps $f$ and $(f^{\ast})^{\ast}$.] No such property exists for the contravariant functor $\ast$. Yet another reason that the isomorphism $G \cong G^{\ast}$ should be regarded as less fundamental than the isomorphism $G \cong (G^{\ast})^{\ast}$ is that the whole theory can be generalized to the setting of topological groups. Specifically, let $G$ be a locally compact abelian group. This means, first of all, that $G$ is a Hausdorff topological group (so that it is a Hausdorff topological space, and the group operations are continuous), that every point has a neighborhood whose closure is compact, and that $G$ is abelian. In this setting, everything we have done except Exercise 2.2.2 goes through without essential change. The characters $\chi : G \longrightarrow \mathbb{C}^{\times}$ are required to be continuous and unitary, which means that $| \chi (g) | = 1$. The character group $G^{\ast}$ is given the topology in which a sequence converges if it converges uniformly on compact sets. We have $G \cong (G^{\ast})^{\ast}$ (Pontryagin duality) and the Fourier transform is an isometry $L^2 (G) \longrightarrow L^2 (G^{\ast})$. Fourier analysis was first carried out in the setting of locally compact abelian groups in a monograph of André Weil. However, $G$ and $G^{\ast}$ may or may not be isomorphic. We have seen that they are isomorphic if $G$ is finite; and if $G =\mathbb{R}$ (the additive group) or $\mathbb{Q}_p$ (the additive group of $p$-adic numbers), then $G \cong G^{\ast}$.
But if $G =\mathbb{R}/\mathbb{Z}$ then $G^{\ast} =\mathbb{Z}$, and it is in this setting that most people first encounter Fourier analysis. A function $f$ on the circle $G =\mathbb{R}/\mathbb{Z}$ is transformed into a sequence of coefficients $\hat{f} (n)$, where $\hat{f} (n) = \int_0^1 f (x) e^{2 \pi i n x} \, d x.$ The integer $n$ corresponds to the character $e^{2 \pi i n x}$ of $\mathbb{R}/\mathbb{Z}$, and the Plancherel formula is the assertion that $\int_0^1 f_1 (x) \overline{f_2 (x)} \, d x = \sum_n \widehat{f_1} (n) \overline{\widehat{f_2} (n)} .$
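None of this is needed for the proofs, but the whole package (orthonormality, the Fourier transform with the $|G|^{-1/2}$ normalization above, and the Plancherel formula) can be checked numerically for a cyclic group, whose characters are $\chi_k(x) = e^{2\pi i k x / n}$. The following Python sketch is my own illustration, not part of the original text:

```python
import cmath
import random

n = 8
chars = [[cmath.exp(2j * cmath.pi * k * x / n) for x in range(n)]
         for k in range(n)]  # chi_k(x) = e^{2 pi i k x / n}

def inner(phi, psi):
    # <phi, psi> = (1/|G|) * sum_g phi(g) * conj(psi(g))
    return sum(p * q.conjugate() for p, q in zip(phi, psi)) / n

# Orthonormality (Lemma 2.2.2): <chi_j, chi_k> = 1 if j == k, else 0.
for j in range(n):
    for k in range(n):
        expected = 1.0 if j == k else 0.0
        assert abs(inner(chars[j], chars[k]) - expected) < 1e-9

# Fourier transform with the text's normalization, and Plancherel.
f = [complex(random.random(), random.random()) for _ in range(n)]
g = [complex(random.random(), random.random()) for _ in range(n)]

def hat(func):
    # hat(f)(chi_k) = (1/sqrt(n)) * sum_x chi_k(x) f(x)
    return [sum(ch[x] * func[x] for x in range(n)) / n**0.5 for ch in chars]

assert abs(inner(f, g) - inner(hat(f), hat(g))) < 1e-9  # <f,g> = <f^,g^>
```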
https://proofwiki.org/wiki/Center_is_Characteristic_Subgroup
# Center is Characteristic Subgroup

## Theorem

Let $G$ be a group. Then its center $\map Z G$ is characteristic in $G$.

## Proof

Let $\phi$ be an automorphism of $G$. Let $x \in \map Z G$ and $y \in G$. Then:

$$\begin{aligned} \map \phi x \, y &= \map \phi x \, \map \phi {\map {\phi^{-1} } y} && \text{automorphisms are bijections} \\ &= \map \phi {x \map {\phi^{-1} } y} && \text{Definition of Group Homomorphism} \\ &= \map \phi {\map {\phi^{-1} } y \, x} && \text{Definition of Center of Group} \\ &= \map \phi {\map {\phi^{-1} } y} \, \map \phi x && \text{Definition of Group Homomorphism} \\ &= y \, \map \phi x \end{aligned}$$

Hence $\map \phi x \in \map Z G$. So we have $\phi \sqbrk {\map Z G} \subseteq \map Z G$.

Since $\phi^{-1}$ is also an automorphism: $\phi^{-1} \sqbrk {\map Z G} \subseteq \map Z G$

Since $\phi$ is a bijection: $\map Z G = \phi \sqbrk {\phi^{-1} \sqbrk {\map Z G}} \subseteq \phi \sqbrk {\map Z G}$

Therefore we conclude that $\phi \sqbrk {\map Z G} = \map Z G$. Hence $\map Z G$ is characteristic in $G$. $\blacksquare$
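As a concrete sanity check (my own illustration, not part of the ProofWiki proof), the Python sketch below builds the dihedral group $D_4$ as permutations of the square's vertices, computes its center $\{e, r^2\}$, and verifies that an automorphism maps the center onto itself. For simplicity the automorphism used is an inner one, conjugation by a fixed element; the theorem of course covers all automorphisms:

```python
def compose(a, b):
    # Permutations as tuples: (a . b)(i) = a[b[i]].
    return tuple(a[b[i]] for i in range(len(b)))

def inverse(a):
    inv = [0] * len(a)
    for i, j in enumerate(a):
        inv[j] = i
    return tuple(inv)

# D4 acting on the square's vertices 0..3: rotation r, reflection s.
r, s = (1, 2, 3, 0), (0, 3, 2, 1)
G = {(0, 1, 2, 3)}
while True:  # close under composition to generate the whole group
    new = {compose(a, b) for a in G | {r, s} for b in G | {r, s}}
    if new <= G:
        break
    G |= new

center = {z for z in G if all(compose(z, g) == compose(g, z) for g in G)}
assert len(G) == 8 and len(center) == 2    # Z(D4) = {e, r^2}

g = r                                      # phi(x) = g x g^{-1}
phi = {x: compose(compose(g, x), inverse(g)) for x in G}
assert {phi[z] for z in center} == center  # phi[Z(G)] = Z(G)
```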
http://beesbuzz.biz/blog/?tag=old+enby+yells+at+cloud
David Yates wrote a great defense of RSS which I completely agree with. To summarize the salient points: [the summarized list did not survive extraction].

Most IndieWeb folks are also really gung-ho about mf2 and h-feed. I don't see any reason not to support them, and they certainly have some advantages: they're easier to integrate into a system that isn't feed-aware, or where it's inconvenient to set up multiple templates. But I've run into plenty of pitfalls when it comes to actually adding mf2 markup to my own site (for example, having to deal with ambiguities around nesting, dealing with below-the-fold content, and a lot of confusion over things like p-summary vs. e-content), and so far there doesn't seem to be any real advantage to doing so, since everything that supports h-feed also supports RSS/Atom, as far as I'm aware.
2019-10-18 07:29:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5347771644592285, "perplexity": 916.3638712822595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677964.40/warc/CC-MAIN-20191018055014-20191018082514-00526.warc.gz"}
https://indico.mitp.uni-mainz.de/event/84/contributions/2550/
55. International Winter Meeting on Nuclear Physics
23-27 January 2017, Bormio, Italy (Europe/Berlin timezone)

The cosmological Lithium problem within a nonextensive statistical mechanics approach
Not scheduled, 20m, Bormio, Italy. Short Contribution.

Speakers: Andrea Lavagno (Politecnico di Torino), Gianpiero Gervino (Università di Torino)

Description: The recent important n_TOF experiment at CERN on the neutron reaction on $^7$Be has shown that the “$^7$Li problem” is still not solved. In the recent past we have shown that small deformations of the tails of the distribution functions of ions in stellar plasma produce appreciable enhancements or depletions of the nuclear fusion rates. The astrophysical environment influences the nuclear reactions (through, for instance, space and time nuclear correlations). These effects can be taken into account by means of a deformed statistics characterized by an entropic parameter; the deviation of this parameter from its standard value is responsible for deformations of the distribution functions and therefore for the enhancement or depletion of nuclear rates. A recent investigation has shown that the “$^7$Li problem” could be solved by adopting a generalized statistical distribution (arXiv:1412.6956). In this work we want to make evident the physical meaning of the entropic parameter in the astrophysical plasmas where $^7$Li is produced or destroyed, and to show how the nuclear fusion rates of interest in this problem can be evaluated (using the experimental results of LUNA at the Gran Sasso Laboratory), working toward a solution of the “$^7$Li problem”.

Primary author: Piero Quarati (Politecnico di Torino)
Co-authors: Andrea Lavagno (Politecnico di Torino), Gianpiero Gervino (Università di Torino)
2020-08-11 22:14:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5711685419082642, "perplexity": 4149.829509678606}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738855.80/warc/CC-MAIN-20200811205740-20200811235740-00069.warc.gz"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=145&t=12354
## Steady State and Pre Equilibrium Approach

$aR \to bP, Rate = -\frac{1}{a} \frac{d[R]}{dt} = \frac{1}{b}\frac{d[P]}{dt}$

Sandeep Gurram 2E
Posts: 42
Joined: Fri Sep 25, 2015 3:00 am

### Steady State and Pre Equilibrium Approach

When do we know to use the steady state or pre-equilibrium approach in determining whether a proposed elementary mechanism matches an observed rate law? Are there definitive clues to look out for? Thank You

Naiomi Desai
Posts: 9
Joined: Fri Sep 25, 2015 3:00 am

### Re: Steady State and Pre Equilibrium Approach

In this class, particularly, we are only going to use the Pre-Equilibrium approach since, as Dr. Lavelle mentioned in class, the Steady State approach is very mathematically involved. That being said, for most of the calculations it does not matter which method you use, as the outcome should come out the same (there are exceptions where the Pre-Equilibrium method would not work, but we do not have to worry about them). Hope I could clear some confusion :)
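To make the pre-equilibrium approach concrete, here is a standard textbook-style illustration (an addition, not from the original thread). Suppose a mechanism has a fast reversible first step followed by a slow product-forming step:

$A + B \rightleftharpoons I \quad (\text{fast}, k_1, k_{-1}), \qquad I \rightarrow P \quad (\text{slow}, k_2)$

Treating the first step as if it stays at equilibrium gives $\frac{[I]}{[A][B]} = \frac{k_1}{k_{-1}} = K$, so the predicted rate law is

$\text{Rate} = k_2[I] = k_2 K [A][B] = \frac{k_1 k_2}{k_{-1}}[A][B]$

If this predicted rate law matches the experimentally observed one, the proposed mechanism is consistent with (though not proved by) the observations.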
2020-07-13 12:36:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3727723956108093, "perplexity": 1918.490297620393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657143365.88/warc/CC-MAIN-20200713100145-20200713130145-00484.warc.gz"}
https://www.esaga.uni-due.de/marc.levine/Courses/2021/MotivesSeminarSS/
# Motives Seminar-SS 2021

## Tropical methods in enumerative geometry

This semester we will examine an approach to enumerative geometry via tropical geometry, especially related to the case of enumerative problems over the reals. We will present some of the history and recent developments in this area. The ultimate goal is to learn how to apply these methods to more general fields using methods of motivic homotopy theory, but due to time constraints, this aspect will not appear in the seminar program except perhaps during our discussions.

The seminar will take place online via Zoom; we meet on Tuesdays, 16-18 Uhr. If you are interested in attending the seminar or giving one of the lectures, please contact me (Marc Levine) at marc.levine@uni-due.de.

### Program: Refined Tropical Invariants

Tropical enumerative invariants are invariants associated to toric surfaces that are defined in a purely combinatorial manner and coincide with classical complex and real geometric ones. We will present the definition and properties of the classical and tropical invariants, and the equalities they satisfy.

Lecture 1 (13.04-Andrés Jaramillo Puentes) Introduction to the seminar. Lecture 1 Slides. Lecture 1 Video.

Lecture 2 (20.04-Michele Ancona) Kontsevich Formula for enumeration of Rational Plane Curves [KM]. Lecture 2 Notes. Lecture 2 Video.

The numbers of complex plane rational curves of degree $d$ through $3d-1$ general given points can be computed recursively with Kontsevich's formula. The proof is obtained by degenerating the configuration of points to a non-generic configuration, so that the curves split into curves of smaller degree, and uses a map to the moduli space $\bar{M}_{0,4}$, in which the cohomology classes of any two points are the same, in particular of two points in the boundary $\bar{M}_{0,4}\setminus M_{0,4}$. The goal of the talk is to survey this construction in detail and understand the combinatorial coefficients of the formula.

Lecture 3 (27.04-Sabrina Pauli) Caporaso-Harris recursion formula for plane curves of positive genus [CH]. Lecture 3 Notes. Lecture 3 Video.

Caporaso and Harris have found a nice way to compute the numbers $N(d,g)$ of complex plane curves of degree $d$ and genus $g$ through $3d+g-1$ general points with the help of relative Gromov-Witten invariants. They use the same idea of degenerating the configurations of points towards a non-generic configuration (e.g. putting three points on a fixed line, in which case the GW invariants would be relative to that line), forcing the counted curves to split into products of curves of lesser degree and genus. The goal of the talk is to understand how these relative invariants are used to extend the argument for rational curves.

Lecture 4 (04.05-Marc Levine) Welschinger invariants for enumeration of real curves [W]. Lecture 4 Notes. Lecture 4 Video.

Over the reals, the number of curves of fixed degree passing through a generic configuration of points is not invariant (i.e. different configurations yield different numbers of curves). Welschinger introduced coefficients for the rational curves so that counting the curves with these coefficients is an invariant. The goal of the talk is to understand the proof of invariance (at least for the real projective plane); the new families of invariants that appear depend on the number of complex conjugate pairs in the configuration of points. You should also mention the formulas satisfied by this invariant for the blow-up of surfaces.
Lecture 5 (11.05-Alessandro D'Angelo) Introduction to Tropical Geometry I. [BIMS] [IMS] Lecture 5 Slides. Lecture 5 Video.

The goal of this talk is to introduce the tropicalization map for a plane curve. The first part is an introduction to tropical curves in $\mathbb{R}^2$ (tropical algebra, zeros of a polynomial, hypersurfaces, balancing condition, degree, genus). A second part should give an introduction to amoebas and the tropical map.

Lecture 6 (18.05-Maria Yakerson) Introduction to Tropical Geometry II. [BIMS] [IMS] [M] Lecture 6 Notes. Lecture 6 Video.

Having introduced tropical curves in Lecture 5, we would like to explore some of their properties. The goal of the talk is to introduce the Newton polytope of a polynomial, its subdivision and its duality with respect to the tropical curve; intersection multiplicity and Bezout's theorem; and the floor diagram.

Lecture 7 (25.05-Heng Xie) Tropical Gromov-Witten invariants [M] Lecture 7 Slides. Lecture 7 Video.

Given a generic configuration of points in $\mathbb{R}^2$, counting the tropical curves of fixed degree and genus with a complex (or real) multiplicity yields an invariant. The goal of this talk is to introduce the complex and real multiplicities associated to a tropical curve and to survey the invariance with respect to the configuration of points.

Lecture 8 (01.06-Manh Toan Nguyen) Mikhalkin correspondence Theorem [M] Lecture 8 Notes. Lecture 8 Video.

The invariant obtained by counting the tropical curves of fixed degree and genus passing through a generic configuration of points with the complex (or real) multiplicity coincides with the Gromov-Witten invariant (or Welschinger invariant, respectively). The goal of this talk is to survey the proof of this equality.

Lecture 9 (08.06-Longting Wu) Block-Göttsche refined invariants [IM] Lecture 9 Notes. Lecture 9 Video.

Block-Göttsche introduced a quantum multiplicity for tropical curves. It's a symmetric Laurent polynomial that specializes to the Mikhalkin complex and real multiplicities. The goal of this talk is to introduce this Block-Göttsche multiplicity, to survey the proof that counting tropical curves of given genus and degree through a generic configuration of points with this quantum multiplicity yields an invariant, and to show examples, presenting its specialization to the Gromov-Witten and Welschinger invariants.

Lecture 10 (15.06-Viktor Kleen) Broccoli invariants [GMS] Lecture 10 Notes. Lecture 10 Video.

Broccoli curves are purely combinatorial objects that were introduced in order to prove the invariance of the corresponding Welschinger numbers and also to find formulas to compute them. The goal of this talk is to introduce them, survey the proof of their invariance and explain the relation with Welschinger numbers.

Lecture 11 (22.06-Tariq Syed) Refined broccoli invariants [GS16] Lecture 11 Slides. Lecture 11 Video.

The goal of this talk is to introduce the refined broccoli invariants obtained by counting the broccoli curves with the quantum multiplicity, and to show examples of this invariant specializing to Welschinger invariants corresponding to configurations of points with complex conjugate pairs.

Lecture 12 (29.06-Sabrina Pauli) Caporaso-Harris formula for refined invariants [BG][GM] Lecture 12 Notes. Lecture 12 Video.

The Caporaso-Harris formula defines a recursion that allows us to calculate positive-genus invariants.
The goal of this talk is to understand a tropical version (with quantum multiplicities) of this recursion using relative invariants and floor diagrams, and to present some computations in low degree.

Lecture 13 (06.07-Andrés Jaramillo Puentes) Node polynomials and the Göttsche conjecture [AB][GS12][BJP] Lecture 13 Slides. Lecture 13 Video.

Block and Göttsche showed that coefficients of fixed degree of the refined node polynomials (or Severi degrees) are polynomial with respect to the degree of the counted curves. Ardila and Block showed that this phenomenon occurs for node polynomials when counting curves on other toric surfaces (like the Hirzebruch surfaces). Brugallé and Jaramillo Puentes showed that while the Gromov-Witten invariants for a fixed genus grow exponentially, coefficients of fixed codegree are polynomial with respect to the degree or with respect to the Newton polygon of the toric surface. The goal of this talk is to explain how these polynomial properties are generalizations of the Göttsche conjecture for node polynomials.

### References

[AB] F. Ardila, F. Block. Universal Polynomials for Severi Degrees of Toric Surfaces. arXiv:1012.5305

[BG] F. Block and L. Göttsche. Refined curve counting with tropical geometry. Compositio Mathematica, 152(1):115–151, 2016.

[BJP] E. Brugallé, A. Jaramillo Puentes. Polynomiality properties of tropical refined invariants. https://arxiv.org/abs/2011.12668

[BIMS] E. Brugallé, I. Itenberg, G. Mikhalkin, and K. Shaw. Brief introduction to tropical geometry. arXiv:1502.05950. Proceedings of Gokova Geometry/Topology conference 2014, International Press (2015), 1–75.

[BM] E. Brugallé, G. Mikhalkin. Floor decompositions of tropical curves: the planar case. arXiv:0812.3354

[CH] L. Caporaso and J. Harris. Counting plane curves of any genus. Invent. Math. 131 (1998), 345–392.

[GM] A. Gathmann, H. Markwig. The Caporaso-Harris formula and plane relative Gromov-Witten invariants in tropical geometry. arXiv:math/0504392

[GMS] A. Gathmann, H. Markwig, and F. Schroeter. Broccoli curves and the tropical invariance of Welschinger numbers. Adv. Math. 240 (2013), 520–574.

[GS12] L. Göttsche, V. Shende. Refined curve counting on complex surfaces. arXiv:1208.1973

[GS16] L. Göttsche, F. Schroeter. Refined broccoli invariants. arXiv:1606.09631

[IM] I. Itenberg, G. Mikhalkin. On Block-Goettsche multiplicities for planar tropical curves. arXiv:1201.0451. Intern. Math. Res. Notices 23 (2013), 5289–5320.

[IMS] I. Itenberg, G. Mikhalkin and E. Shustin. Tropical Algebraic Geometry. Birkhäuser, Oberwolfach Seminars Series, Vol. 35, 2007.

[KM] M. Kontsevich and Y. I. Manin. Gromov-Witten classes, quantum cohomology, and enumerative geometry. Comm. Math. Phys. 164 (1994), 525–562. (hep-th/9402147)

[M] G. Mikhalkin. Enumerative tropical algebraic geometry in $\mathbb{R}^2$. arXiv:math/0312530. Journal of the AMS.

[W] J.-Y. Welschinger. Invariants of real symplectic 4-manifolds and lower bounds in real enumerative geometry. Invent. Math., 162(1):195–234, 2005.
2021-07-29 09:26:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5874580144882202, "perplexity": 1526.7106779500439}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153854.42/warc/CC-MAIN-20210729074313-20210729104313-00290.warc.gz"}
http://math.stackexchange.com/questions/112487/the-pushfoward-in-the-context-of-sets
The Pushforward in the Context of Sets

I am trying to understand the meaning of a pushforward in the simplest context possible, where the functions involved are defined on sets. From my readings in differential geometry, I have arrived at the following understanding, which I have attempted to codify in a precise definition. Unfortunately, I have not been able to find a reference that defines the pushforward in this minimal context. My proposed definition is as follows:

Let $\phi:X \rightarrow Y$ be a bijection and let $f:X\rightarrow Z$ be any function from $X$ to the set $Z$. Then, the pushforward of $f$ by $\phi$ is a map $$\phi_*:Z^X \rightarrow Z^Y$$ defined by $$\phi_* f := f \circ \phi^{-1}.$$

So my question: Is this definition correct, and is this the right way to think of the pushforward when only maps between sets are involved?

Pardon my ignorance, but isn't the terminology of "pushforward" (as well as pullback/pushout/etc.) a bit categorical? Shouldn't you therefore add a tag for [category-theory] or something similar? – Asaf Karagila Feb 23 '12 at 15:45

@AsafKaragila Not really sure how it should be tagged. The only place I've encountered "pushforward" is within the context of manifolds but I'm trying to understand the operation at the most fundamental level. For all I know, the "pushforward" may not even technically be defined when only sets are involved...hence the question. – ItsNotObvious Feb 23 '12 at 15:48

@Asaf: For what it is worth, Mac Lane's "Categories for the working mathematician" does not have "pushforward" in the index. I think you are confusing it with "pushout". – Arturo Magidin Feb 23 '12 at 15:50

This is just a special case of the standard fact that a map $f\colon A\to B$ induces a homomorphism $f^*\colon\mathrm{Hom}(B,Z)\to\mathrm{Hom}(A,Z)$ by precomposition; the only difference is that you are "looking" at $\phi\colon X\to Y$ instead of looking at $\phi^{-1}\colon Y\to X$, which is what you are using for the construction. – Arturo Magidin Feb 23 '12 at 15:51

I think that pushforward usually refers to a covariant alternative to the pullback. Hence, in some cases, you can get a covariant map between your Hom-sets (for example, in the context of homology or sheaf theory), but I don't think it's possible in general (for example, on manifolds, to push forward a vector field, you usually need a diffeomorphism). – M Turgeon Feb 23 '12 at 16:01
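A concrete toy example of the definition above (an illustration added here, not from the original thread): take $X = \{1, 2\}$, $Y = \{a, b\}$ and $Z = \mathbb{R}$, with the bijection $\phi(1) = a$, $\phi(2) = b$. If $f:X\rightarrow Z$ is given by $f(1) = 10$ and $f(2) = 20$, then $\phi_* f = f \circ \phi^{-1}$ is the function $Y \rightarrow Z$ with $(\phi_* f)(a) = f(\phi^{-1}(a)) = f(1) = 10$ and $(\phi_* f)(b) = f(2) = 20$. The values of $f$ have simply been transported ("pushed forward") along $\phi$ so that they live on $Y$ instead of $X$.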
2014-11-26 12:51:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9572742581367493, "perplexity": 231.42328068820132}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931006855.76/warc/CC-MAIN-20141125155646-00181-ip-10-235-23-156.ec2.internal.warc.gz"}
http://www.gamedev.net/index.php?app=forums&module=extras&section=postHistory&pid=5048237
### #Actual stein102 Posted 30 March 2013 - 12:11 AM

This is my code for the ball colliding with the paddle in my game. As of right now, if the ball approaches the paddle and hits it on the corner, the ball seems to get trapped inside of the paddle. Any suggestions? I'm using the Slick2D API if it makes a difference.

if (ball.ballCircle.intersects(paddle.paddleRect)) {
    // left side of the paddle: bounce horizontally
    if (ball.ballCircle.getCenterX() < paddle.paddleRect.getX()) {
        ball.setDx(-ball.getDx());
    }
    // right side: bounce horizontally
    else if (ball.ballCircle.getCenterX() > paddle.paddleRect.getX() + paddle.paddleRect.getWidth()) {
        ball.setDx(-ball.getDx());
    }
    // top: bounce vertically
    else {
        ball.setDy(-ball.getDy());
    }
}
2014-08-23 13:40:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5119990110397339, "perplexity": 1453.7866418252843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500826025.8/warc/CC-MAIN-20140820021346-00337-ip-10-180-136-8.ec2.internal.warc.gz"}
https://rovislab.com/gridsim.html
GridSim: A Vehicle Kinematics Engine for Deep Neuroevolutionary Control in Autonomous Driving

12 February 2019

Abstract: Vehicle control has been a challenge in the field of autonomous driving for many years. Current state of the art solutions mainly use supervised end-to-end learning, or decoupled perception, planning and action pipelines. If we consider autonomous driving as a multi-agent setting, deep reinforcement learning is also suitable for solving the driving task. However, both deep Q-learning and policy gradient methods require that the agent interacts with its surroundings in order to learn its behavior via reward signals. In autonomous driving this task is performed within a simulated environment. In this paper we introduce GridSim, which is an autonomous driving simulator engine that uses a car-like robot architecture to generate occupancy grids from simulated sensors. It allows for multiple scenarios to be easily represented and loaded into the simulator as backgrounds. We use GridSim to study the performance of two deep learning approaches used in autonomous driving, that is, deep reinforcement learning and driving behavioral learning through genetic algorithms. The algorithms are evaluated on simulated highways, curved roads and inner-city scenarios, all including different driving limitations. The methods are evaluated by monitoring different performance metrics. The deep network used for vehicle control uses sequences of synthetic occupancy grids as input, while encoding the desired behavior in a two-element fitness function describing a maximum travel distance and a maximum forward speed, bounded to a specific interval.

Citation

@inproceedings{GridSim2019,
    author = {Bogdan Trasnea and Andrei Vasilcoi and Claudiu Pozna and Sorin Grigorescu},
    title = {GridSim: A Vehicle Kinematics Engine for Deep Neuroevolutionary Control in Autonomous Driving},
    booktitle = {Int. Conf. on Robotic Computing IRC 2019},
    address = {Naples},
    month = {25-27 February},
    year = {2019}
}

1. Architecture

GridSim and two possible pipelines for the deep neural control of a simulated car. (top) GridSim driving scenarios. (middle) DQN Agent pipeline using the input OGs for interacting with the simulated environment in order to maximize its reward function. (bottom) Neuroevolutionary Agent: the DNN’s weights are evolved using genetic algorithms with altered breeding rules, in order to maximize a fitness function.

2. Vehicle Dynamics Simulation Engine

The simulation engine uses the non-holonomic robot car kinematics [1]. The steering is modelled through the angle δ as an extra degree of freedom on the front wheel, while the “non-holonomic” assumption is expressed as a differential constraint on the motion of the car, which restricts the vehicle from making lateral displacements without simultaneously moving forward. GridSim contains a menu which allows switching between multiple scenarios, which are easily represented and loaded into the simulator as backgrounds. Snapshots of GridSim’s possible scenarios can be seen in the top of the figure above. The simulated sensors have a field of view (FOV) of 120 degrees. They react when an obstacle is sensed, by marking it as an occupied area. The static obstacles are mapped a priori to the backgrounds as lists of polygons. The simulated sensors continuously check if the perception rays are colliding with the given polygons.
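The page does not include the integration code itself, but a minimal sketch of the kind of non-holonomic (bicycle-model) update described above might look like the following; the struct layout, names, and simple Euler step are illustrative assumptions, not GridSim's actual implementation:

#include <cmath>

// Kinematic bicycle-model state for a car-like robot.
struct CarState {
    double x = 0.0;       // position (m)
    double y = 0.0;
    double heading = 0.0; // orientation theta (rad)
};

// One Euler integration step: v is the forward speed (m/s), delta the
// front-wheel steering angle (rad), wheelbase the axle distance (m).
CarState step(CarState s, double v, double delta, double wheelbase, double dt)
{
    s.x       += v * std::cos(s.heading) * dt;
    s.y       += v * std::sin(s.heading) * dt;
    s.heading += (v / wheelbase) * std::tan(delta) * dt;
    return s;
}

The non-holonomic constraint is implicit in the update: lateral position can only change through the heading, so the car cannot displace sideways without simultaneously moving forward.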
3. Method

We use GridSim to study the performance of two simulation-based autonomous driving approaches: deep reinforcement learning and the control of a deep neuroevolutionary agent. The neuroevolutionary part of the algorithm represents the evolution of the weights of a deep neural network by using a population-based genetic algorithm with altered breeding rules (custom tournament selection). The training is performed against a multi-objective fitness function which maximizes two elements: the traveled path and the longitudinal speed. This learning procedure was first proposed by the authors for training a generative one-shot learning classifier [2]. It aims to compute optimal weights for a collection of K deep networks:

$\delta(\cdot;\theta) = [\delta_1(\cdot;\theta_1), ..., \delta_i(\cdot;\theta_i), ..., \delta_K(\cdot;\theta_K)]^T$

The agent controls the ego-car using the elite individual DNN from the given generation, while the custom tournament selection algorithm ensures that the best-accuracy individuals carry on to the next generation unmodified. As a comparison to our Neuroevolutionary Agent, we have implemented a DQN agent, which uses a decision space of eight actions. The algorithm starts from an initial state and proceeds until the agent has collided with its surroundings. In every step, the agent is described by the current state s, follows policy p(s), and observes the next state together with the reward received from the environment. The reward policy is constructed in the following way:

$r = f(d_t) \cdot 0.15 + f(v_t) \cdot 0.15 + f(S_t) \cdot 0.7$

$S = [s_1, s_2, ..., s_n]^T$

$f(S) = \min(S)$

where r is the total normalized reward, f(d) is the distance travelled, f(v) is the current velocity of the vehicle, f(S) is the sensor policy and S is the sensor action-value vector. The algorithm continues until the convergence of the Q function, or until a certain number of episodes is reached, while also ensuring a sanity check of 15 actions.

4. Experiments

Comparison of the DQN and Neuroevolutionary agents in the five GridSim scenarios. Performance comparison with regard to overall velocity error percentage and average training time of both models with different scenarios in GridSim. We observe that the velocity error of the neuroevolutionary approach is smaller in all scenarios, while keeping its training time low.

4.1. Deep reinforcement learning

After several hours of training in the simulator (see the training-time comparison above), the DQN network could navigate portions of the environment, but would still drive off the road. In order to converge to a collision-free model, the DQN agent needed over 20 hours of interacting with the GridSim environment.

4.2. Deep neuroevolutionary agent

The input of the neural network is a vector described by the values of the occupancy grid generated by the synthetic beams of the radar sensor model. The number of sensor beams is also configurable and can be increased to any resolution necessary. After the desired behavior was achieved, and the car was able to navigate the seamlessly generated model by itself, we performed incremental updates to the decision space.

References

[1] J. Kong et al., "Kinematic and dynamic vehicle models for autonomous driving control design", in Intelligent Vehicles Symposium, 2015, pp. 1094–1099.

[2] S. Grigorescu, "GOL: A Semi-Parametric Approach for One-Shot Learning in Autonomous Vision", in Int. Conf. on Robotics and Automation ICRA 2018.
2022-12-03 12:25:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5194445848464966, "perplexity": 1715.325993592443}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710931.81/warc/CC-MAIN-20221203111902-20221203141902-00410.warc.gz"}
https://statmd.wordpress.com/2013/06/25/how-to-choose-among-alternative-narratives/
## How to choose among alternative narratives

The arithmetic of choosing among alternative narratives, in the clinic and elsewhere, can be made rigorous by selecting a numerical scale for representing the extremes from falseness/impossibility all the way to truthfulness/certainty. If the scale is selected as a probability one, ranging from 0 to 1, and certain rules for the manipulation of probabilities as mathematical objects are employed, then one arrives at a formal inferential system. This system, which allows for logically consistent reasoning in the face of uncertainty, is known as Bayesian probability theory, and its modern development can be found in the texts by Keynes, Cox and Jaynes.

Choosing among alternative narratives (or models of reality) is accomplished by means of a non-controversial mathematical rule known as Bayes theorem. For each narrative one forms the product of the a priori belief in the plausibility of the narrative and the compatibility of the data at hand with the narrative (the “likelihood”). In the medical diagnosis example we called these quantities $x_1,x_2, ... ,x_n$ and $y_1,y_2,...,y_n$ respectively, but for the sake of conformity we will use the notation $p(H_i)$ and $p(D|H_i)$ for $x_i$ and $y_i$ respectively. After forming these quantities, one calculates the a posteriori belief in each narrative by dividing each one of the $p(H_i) \times p(D|H_i)$ by the totality of the evidence provided by the sum of all such products. These a posteriori beliefs are also expressed on a probability scale and thus likewise range between the extremes of falseness/impossibility and truthfulness/certainty, represented by the numbers 0 and 1 respectively. In order to choose among alternative models of the world, it suffices to select the narrative with the highest posterior probability.
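In symbols (a restatement of the rule just described, using the notation above), Bayes theorem gives the a posteriori belief in narrative $H_i$ given data $D$ as

$p(H_i \mid D) = \frac{p(H_i)\, p(D \mid H_i)}{\sum_j p(H_j)\, p(D \mid H_j)}$

and the selection rule is simply to pick the narrative $H_i$ with the largest $p(H_i \mid D)$.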
2017-06-26 20:51:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8194279074668884, "perplexity": 556.3469803556443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320865.14/warc/CC-MAIN-20170626203042-20170626223042-00096.warc.gz"}
https://thebreakfastpost.com/category/work/
# MIREX 2018 submissions The 2018 edition of MIREX, the Music Information Retrieval Evaluation eXchange, was the sixth in a row for which we at the Centre for Digital Music submitted a set of Vamp audio analysis plugins for evaluation. For the third year in a row, the set of plugins we submitted was entirely unchanged — these are increasingly antique methods, but we have continued to submit them with the idea that they could provide a useful year-on-year baseline at least. It also gives me a good reason to take a look at the MIREX results and write this little summary post, although I’m a bit late with it this year, having missed the end of 2018 entirely! For reference, the past five years’ posts can be found at: 2017, 2016, 2015, 2014, and 2013. ### Structural Segmentation No results appear to have been published for this task in 2018; I don’t know why. Last time around, ours was the only entry. Maybe it was the only entry again, and since it was unchanged, there was no point in running the task. ### Multiple Fundamental Frequency Estimation and Tracking After 2017’s feast with 14 entries, 2018 is a famine with only 3, two of which were ours and the third of which (which I can’t link to, because its abstract is missing) was restricted to a single subtask, in which it got reasonable results. Results pages are here and here. ### Audio Onset Detection Almost as many entries as last time, and a new convolutional network from Axel Röbel et al disrupts the tidy sweep of Sebastian Böck’s group at the top of the results table. Our simpler methods are squarely at the bottom this time around. Röbel’s submission has a nice informative abstract which casts more light on the detailed result sets and is well worth a read. Results here. ### Audio Beat Tracking Pure consolidation: all the 2018 entries are repeats from 2017, and all perform identically (with the methods from Böck et al doing better than our plugins). Every year I say that this doesn’t feel like a solved problem, and it still doesn’t — the results we’re seeing here still don’t seem all that close to human performance, but perhaps there are misleading properties to the evaluation. Results here, here, here. ### Audio Tempo Estimation This is a busier category, with a new dataset and a few new submissions. The new dataset is most intriguing: all of the submissions perform better with the new dataset than the older one, except for our QM Tempo Tracker plugin, which performs much, much worse with the new one than the old! I believe the new dataset is of electronic dance music, so it’s likely that much of it is high tempo, perhaps tripping our plugin into half-tempo octave errors. We could probe this next time by tweaking the submission protocol a little. Submissions are asked to output two tempo estimates, and the results report whether either of them was correct. Because our plugin only produces one estimate, we lazily submit half of that estimate as our second estimate (with a much lower salience score). But if our single estimate was actually half of the “true” value, as is plausible for fast music, we would see better scores from submitting double instead of half as the second estimate. Results are here and here. ### Audio Key Detection Some novelty here from a pair of template-based methods from the Universitat Autonoma de Barcelona, one attributed to Galin and Castells-Rufas and the other to Castells-Rufas and Galin. Their performance is not a million miles away from our own template-based key estimation plugin. 
The strongest results appear to be from a neural network method from Korzeniowski et al at JKU, an updated version of one of last year’s better-performing submissions, an implementation of which can be found in the madmom library. Results are here. ### Audio Chord Estimation A lively (or daunting) category. A team from Fudan University in Shanghai, whence came two of the previous year’s strongest submissions, is back with another new method, an even stronger set of results, and once again a very readable abstract; and the JKU team have an updated model, just as in the key detection category, which also performs extremely impressively. Meanwhile a separate submission from JKU, due to Stefan Gasser and Franz Strasser, would have been at the very top had it been submitted a year earlier, but is now a little way behind. Convolutional neural networks are involved in all of these. Our Chordino submission can still be described as creditable. Results can be found here. # EasyMercurial v1.4 Today’s second post about a software release will be a bit less detailed than the first. I’ve just coordinated a new release of EasyMercurial, a cross-platform user interface for version control software that was previously updated in February 2013. It looks a bit like this. EasyMercurial was written with a bit of academic funding from the SoundSoftware project, which ran from 2010 to 2014. The idea was to make something as simple as possible to teach and understand, and we believed that the Mercurial version-control system was the simplest and safest to learn so we should base it on that. The concurrent rise of Github, and resulting dominance of Git as the version control software that everyone must learn, took the wind out of its sails. We eventually tacitly accepted that the v1.3 release made in 2013 was “finished”, and abandoned the proposed feature roadmap. (It’s open source, so if someone else wanted to maintain it, they could.) EasyMercurial has continued to be a nice piece of software to use, and I use it myself on many projects, so when a recent change in the protocol support at the world’s biggest public Mercurial hosting site, Bitbucket, broke the Windows version of EasyMercurial 1.3, I didn’t mind having an excuse to update it. So now we have version 1.4. This release doesn’t change a great deal. It updates the code to use the Qt5 toolkit and improves support for hi-dpi displays. I’ve dragged the packaging process up-to-date and re-packaged using current Qt, Mercurial (where bundled), and KDiff3 diff-merge code. Mercurial usage itself has moved on in most quarters since EasyMercurial was conceived. EasyMercurial assumes that you’ll be using named branches for branching development, but these days using bookmarks for lightweight branching (more akin to Git branching) is more popular — EasyMercurial shows bookmarks but can’t do anything useful with them. Other features of modern Mercurial that could have been very helpful in a simple application like this, such as phases, are not supported at all. Anyway: EasyMercurial v1.4. Free for Windows, Linux, and macOS. Get it here. # Sonic Visualiser v3.2 Another release of Sonic Visualiser is out. This one, version 3.2, has some significant visible changes, in contrast to version 3.1 which was more behind-the-scenes. The theme of this release could be said to be “oversampling” or “interpolation”. 
### Waveform interpolation Ever since the Early Days, the waveform layer in Sonic Visualiser has had one major limitation: you can’t zoom in any closer (horizontally) than one pixel per sample. Here’s what that looks like — this is the closest zoom available in v3.1 or earlier: This isn’t such a big deal with a lower-resolution display, since you don’t usually want to interact with individual samples anyway (you can’t edit waveforms in Sonic Visualiser). It’s a bigger problem with hi-dpi and “retina” displays, on which individual pixels can’t always be made out. Why this limitation? It allowed an integer ratio between samples and pixels to be used internally, which made it a bit easier to avoid rounding errors. It also sidestepped any awkward decisions about how, or whether, to show a signal in between the sample points. (In a waveform editor like Audacity it is necessary to be able to interact with individual samples, so some decision has to be made about what to show between the sample points when zoomed in closely. Older versions of Audacity connected the sample points with straight lines, a decision which attracted criticism as misrepresenting how sampling works. More recent versions show sample points on separate stems without connecting lines.) In Sonic Visualiser v3.2 it’s now possible to zoom closer than one pixel per sample, and we show the signal oversampled between the sample points using sinc interpolation. Here’s an example from the documentation, showing the case where the sample values are all zero but for a single sample with value 1: The sample points are the little square dots, and the wiggly line passing through them is the interpolated signal. (The horizontal line is just the x axis.) The principle here is that, although there are infinitely many ways to join the dots, there is only one that is “smooth” enough to be expressible as a sum of sinusoids of no higher frequency than half the sampling rate — which is the prerequisite for reconstructing a signal sampled without aliasing. That’s what is shown here. The above artificial example has a nice shape, but in most cases with real music the interpolated signal will not be very different from just joining the dots with a marker. It’s mostly relevant in extreme cases. Let’s replace the single sample of value 1 above with a pair of consecutive samples of value 0.5: Now we see that the interpolated signal has a peak between the two samples with a greater level than either sample. The peak sample value is not a safe indication of the peak level of the analogue signal. Incidentally, another new feature in v3.2 is the ability to import audio data from a CSV or similar data file rather than only from standard audio formats. That made it much easier to set up the examples above. ### Spectrogram and spectrum oversampling The other oversampling-related feature added in v3.2 appears in the spectrogram and spectrum layers. These layers now have an option to set an oversampling level, from the default “1x” up to “8x”. This option increases the length of the short-time Fourier transform used to generate the spectrum, by padding the time-domain signal window with additional zero-valued samples before calculating the transform. This results in an oversampled frequency-domain output, with a higher visual resolution than would have been obtained from the original, un-zero-padded sample window. The result is a smoother spectrum in which the locations of peaks can be seen with a little more accuracy, somewhat like the waveform example above. 
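As an illustration of what that zero-padding does (a sketch of the general idea, not Sonic Visualiser's actual code): window the frame, append zeros, and take a longer transform. The extra bins fall between the original ones, interpolating the spectrum without adding any information. A naive O(N²) DFT keeps the example self-contained; a real implementation would use an FFT.

#include <cmath>
#include <complex>
#include <vector>
using std::vector;

// Magnitude spectrum of one frame, zero-padded by the given oversampling
// factor before the transform.
vector<float> oversampledSpectrum(const vector<float> &frame, int oversample)
{
    const double pi = 3.141592653589793;
    int n = frame.size();
    int padded = n * oversample;
    vector<double> windowed(padded, 0.0); // implicit zero-padding beyond n
    for (int i = 0; i < n; ++i) {
        double hann = 0.5 - 0.5 * std::cos(2.0 * pi * i / n);
        windowed[i] = frame[i] * hann;
    }
    vector<float> mags(padded / 2 + 1);
    for (int k = 0; k <= padded / 2; ++k) {
        std::complex<double> sum = 0.0;
        for (int i = 0; i < n; ++i) { // summing further adds only zeros
            sum += windowed[i] * std::polar(1.0, -2.0 * pi * k * i / padded);
        }
        mags[k] = float(std::abs(sum));
    }
    return mags;
}

Calling this with oversample = 1 gives the ordinary windowed spectrum; oversample = 8 corresponds to the "8x" option described above.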
This is nice in principle, but it can be deceiving. In the case of waveform oversampling, there can be only one “matching” signal, given the sample points we have and the constraints of the sampling theorem. So we can oversample as much as we like, and all that happens is that we approximate the analogue signal more closely.

But in a short-time spectrum or spectrogram, we only use a small window of the original signal for each spectrum or spectrogram-column calculation. There is a tradeoff in the choice of window size (a longer window gives better frequency discrimination at the expense of time discrimination) but the window always exposes only a small part of the original signal, unless that signal is extremely short. Zero-padding and using a longer transform oversamples the output to make it smoother, but it obviously uses no extra information to do it — it still has no access to samples that were not in the original window. A higher-resolution output without any more information at the input can appear more effective at discriminating between frequencies than it really is.

Here’s an example. The signal consists of a mixture of two sine waves one tone apart (440 and 493.9 Hz). A log-log spectrum (i.e. log frequency on x axis, log magnitude on y) with an 8192-point short-time Fourier transform looks like this:

A log-log spectrum with a 1024-point STFT looks like this [1]:

The 1024-sample input isn’t long enough to discriminate between the two frequencies — they’re close enough that it’s necessary to “hear” a longer fragment than this in order to determine that there are two frequencies at all [2]. Add 8x oversampling to that last example, and it looks like this:

This is very smooth and looks super detailed, and indeed we can use it to read the peak value with more accuracy — but the peak is deceptive, because it is still merging the two frequency components. In fact most of the detail here consists of the frequency response of the 1024-point windowing function used to shape the time-domain window (it’s a Hann window in this case).

Also, in the case of peak frequencies, Sonic Visualiser might already provide a way to get the same information more accurately — its peak-frequency identification in both spectrum and spectrogram views uses phase unwrapping instead of spectrum interpolation to estimate the frequencies of stable harmonics, which gives very good results if the sound is indeed harmonic and stable.

Finally, there’s a limitation in Sonic Visualiser’s implementation of this oversampling feature that eliminates one potential use for it, which is to choose the length of the Fourier transform in order to align bin frequencies with known or expected frequency components of the signal. We can’t generally do that here, since Sonic Visualiser still only supports a few fixed multiples of a power-of-two window size.

In conclusion: interesting if you know what you’re looking at, but use with caution.

[1] Notice that we are connecting sample points in the spectrum with straight lines here — the same thing I characterised as a bad idea in the discussion of waveforms above. I think this is more forgivable here because the short-time transform output is not a sampled version of an original signal spectrum, but it’s still a bit icky.

[2] This is not exactly true, but it works for this example.

# Rubber Band Library v1.8.2

I have finally managed to get together all the bits that go into a release of the Rubber Band library, and so have just released version 1.8.2.
The Rubber Band library is a software library for time-stretching and pitch-shifting of audio, particularly music audio. That means that it takes a recording of music and adjusts it so that it plays at a different speed or at a different pitch, and if desired, it can do that by changing the speed and pitch “live” as the music plays. This is impossible to do perfectly: essentially you are asking software to recreate what the music would have sounded like if the same musicians had played it faster, slower, or in a different key, and there just isn’t enough information in a recording to do that. It changes the sound and is absolutely not a reversible transformation. But Rubber Band does a pretty nice job. For anyone interested, I wrote a page (here) with a technical summary of how it does it. I originally wrote this library between 2005 and 2007, with a v1.0 release at the end of 2007. My aim was to provide a useful tool for open source GPL-licensed audio applications on Linux, like Ardour or Rosegarden, with a commercial license as an afterthought. As so often happens, I seriously underestimated the work involved in getting the library from “working” (a few weeks of evening and weekend coding) to ready to use in production applications (two years). It has now been almost six years since the last Rubber Band release, and since this one is just a bugfix release, we can say the library is pretty much finished. I would love to have the time and mental capacity for a version 2: there are many many things I would now do differently. (Sadly, the first thing is that I wouldn’t rely on my own ears for basic testing any more—in the intervening decade my hearing has deteriorated a lot and it amazes me to think that I used to accept it as somehow authoritative.) In spite of all the things I would change, I think this latest release of version 1 is pretty good. It’s not the state-of-the-art, but it is very effective, and is in use right now in professional audio applications across the globe. I hope it can be useful to you somehow. # Repoint: A manager for checkouts of third-party source code dependencies I’ve just tagged v1.0 of Repoint, a tool for managing library source code in a development project. Conceptually it sits somewhere between Mercurial/Git submodules and a package manager like npm. It is intended for use with languages or environments that don’t have a favoured package manager, or in situations where the dependent libraries themselves aren’t aware that they are being package-managed. Essentially, situations where you want, or need, to be a bit hands-off from any actual package manager. I use it for projects in C++ and SML among other things. Like npm, Bundler, Composer etc., Repoint refers to a project spec file that you provide that lists the libraries you want to bring in to your project directory (and which are brought in to the project directory, not installed to a central location). Like them, it creates a lock file to record the versions that were actually installed, which you can commit for repeatable builds. 
But unlike npm et al, all Repoint actually does is clone from the libraries’ upstream repository URLs into a subdirectory of the project directory, just as happens with submodules, and then report accurately on their status compared with their upstream repositories later.

The expected deployment of Repoint consists of copying the Repoint files into the project directory, committing them along with everything else, and running Repoint from there, in the manner of a configure script — so that developers generally don’t have to install it either. It’s portable and it works the same on Linux, macOS, or Windows. Things are not always quite that simple, but most of the time they’re close.

At its simplest, Repoint just checks stuff out from Git or whatever for you, which doesn’t look very exciting. An example on Windows:

Simple though Repoint’s basic usage is, it can run things pretty rigorously across its three supported version-control systems (git, hg, svn), it gets a lot of annoying corner cases right, and it is solid, reliable, and well-tested across platforms. The README has more documentation, including some more advanced features.

### Is this of any use to me?

Repoint might be relevant to your project if all of the following apply:

• You are developing with a programming language or environment that has no obvious single answer to the “what package manager should I use?” question; and
• Your code project depends on one or more external libraries that are published in source form through public version-control URLs; and
• You can’t assume that a person compiling your code has those libraries installed already; and
• You don’t want to copy the libraries into your own version-control repo to form a Giant Monorepo; and
• Most of your dependent libraries do not similarly depend on other libraries (Repoint doesn’t support recursive dependencies at all).

Beyond mere relevance, Repoint might be actively useful to your project if any of the following also apply:

• The libraries you’re using are published through a mixture of version-control systems, e.g. some use Git but others Mercurial or Subversion; or
• The libraries you’re using and, possibly, your own project might change from one version-control system to another at some point in the future.

See the README for more caveats and general documentation.

### Example

The biggest current example of a project using Repoint is Sonic Visualiser. If you check out its code from Github or from the SoundSoftware code site and run its configure script, it will call out to repoint install to get the necessary dependencies. (On platforms that don’t use the configure script, you have to run Repoint yourself.)

Note that if you download a Sonic Visualiser source code tarball, there is no reference to Repoint in it and the Repoint script is never run — Repoint is very much an active-developer tool, and it includes an archive function that bundles up all the dependent libraries into a tarball so that people building or deploying the end result aren’t burdened with any additional utilities to use.

I also use Repoint in various smaller projects. If you’re browsing around looking at them, note that it wasn’t originally called Repoint — its working title in earlier versions was vext and I haven’t quite finished switching the repos over. Those earlier versions work fine of course, they just use different words.

# What does a convolutional neural net actually do when you run it?

Convolutional neural networks (or convnets or CNNs) are a staple of “deep learning”.
There are many tutorials available that describe what they do, either mathematically or via quasi-mystical appeals to intuition, and introduce how to train and use them, often with image classification examples. This post has a narrower focus. As a programmer, I am happy processing floating-point data in languages like C++, but I’m more likely to be building applications with pre-trained models than training new ones. Say I have a pre-trained convnet image classifier, and I use it to carry out a single classification of one image (“tell me whether this shows a pig, cow, or sheep”) — what does that actually mean, in terms of code? I’ll go through an example in the form of a small hand-written C++ convolutional net that contains only the “execution” code, no training logic. It will be very much an illustrative implementation, not production code. It will use pre-trained data adapted from a similar model in Keras, on which more later. The specific example I am using is a typical image classification one: identifying which of five types of flower is visible in a small colour image (inspired by this tutorial). All the code I’m describing can be found in this Github repository. ### Networks, layers, weights, training A network in this context is a pipeline of functions through which data is passed, with the output of each function going directly to the input of the next one. Each of these functions is known as a layer. Our example will begin by supplying the image data as input to the first layer, and end with the final layer returning a set of five numbers, one for each kind of flower the network knows about, indicating how likely the network thinks it is that the image shows that kind of flower. So to run one classification, we just have to get the image data into the right format and pass it through each layer function in turn, and out pops the network’s best guess. The layers themselves perform some mathematical operation on the data. Some of them do so by making use of a set of other values, provided separately, which they might multiply with the inputs in some way (in which case they are known as weights) or add to the return values (known as biases). Coming up with suitable weight and bias values for these layers is what training is for. To train a network, the weights and biases for each such layer are randomly initialised, then the network is repeatedly run to provide classifications of known inputs; each of the network’s guesses is compared with the known class of the input, and the weights and biases of the network are updated depending on how accurate the classification was. Eventually, with luck, they will converge on some reliable values. Training is both difficult and tedious, so here we will happily leave it to machine-learning researchers. These trainable layers are then typically sandwiched by fixed layers that provide non-linearities (also known as activation functions) and various data-manipulation bits and bobs like max-pooling. More on these later. The layers are pure functions: a layer with a given set of weights and biases will always produce the same output for a given input. This is not always true within recurrent networks, which model time-varying data by maintaining state in the layers, but it is true of classic convolutional nets. ### Our example network This small network consists of four rounds of convolution + max-pooling layers, then two dense layers. 
I constructed and trained this using Keras (in Python) based on a “flower pictures” dataset downloaded from the Tensorflow site, with 5 labelled classes of flowers. My Github repository contains a script obtain-data.sh which downloads and prepares the images, and a Python program with-keras/train.py which trains this network on them and exports the trained weights into a C++ file. The network isn’t a terribly good classifier, but that doesn’t bother me here.

I then re-implemented the model in C++ without using a machine-learning library. Here’s what the pipeline looks like. This can be found in the file flower.cpp in the repository. All of the functions called here are layer functions defined elsewhere in the same file, which I’ll go through in a moment. The variables with names beginning weights_ or biases_ are the trained values exported from Keras. Their definitions can be found in weights.cpp.

```cpp
vector<float> classify(const vector<vector<vector<float>>> &image)
{
    // the input image flows through each layer function in turn
    vector<vector<vector<float>>> t = image;

    t = convolve(t, weights_firstConv, biases_firstConv);
    t = activation(t, "relu");
    t = maxPool(t, 2, 2);

    t = convolve(t, weights_secondConv, biases_secondConv);
    t = activation(t, "relu");
    t = maxPool(t, 2, 2);

    t = convolve(t, weights_thirdConv, biases_thirdConv);
    t = activation(t, "relu");
    t = maxPool(t, 2, 2);

    t = convolve(t, weights_fourthConv, biases_fourthConv);
    t = activation(t, "relu");
    t = maxPool(t, 2, 2);

    vector<float> flat = flatten(t);

    flat = dense(flat, weights_firstDense, biases_firstDense);
    flat = activation(flat, "relu");

    flat = dense(flat, weights_labeller, biases_labeller);
    flat = activation(flat, "softmax");

    return flat;
}
```

Many functions are used for more than one layer — for example we have a single convolve function used for four different layers, with different weights, bias values, and input shapes each time. This reuse is possible because layer functions don’t retain any state from one call to the next.

You can see that we pass the original image as input to the first layer, but subsequently we just take the return value from each function and pass it to the next one. These values have types that we are expressing as nested vectors of floats. They are all varieties of tensor, on which let me digress for a moment:

### Tensors

For our purposes a tensor is a multidimensional array of numbers, in our case floats, of which vectors and matrices are special cases. A tensor has a shape, which we can write as a list of sizes, and the length of that list is the rank. (The word dimension can be ambiguous here and I’m going to try to avoid it… apart from that one time I used it a moment ago…)

Examples:

• A matrix is a rank-2 tensor whose shape has two values, height and width. The matrix $\begin{bmatrix}1.0&3.0&5.0\\2.0&4.0&6.0\end{bmatrix}$ has shape [2, 3], if we are giving the height first.
• A C++ vector of floats is a rank-1 tensor, whose shape is a list of one value, the number of elements in the vector. The vector { 1.0, 2.0, 3.0 } has the shape [3].
• A single number can also be considered a tensor, of rank 0, with shape [].

All of the values passed to and returned from our network layer functions are tensors of various shapes. For example, a colour image will be represented as a tensor of rank 3, as it has height, width, and a number of colour channels.

In this code, we are storing tensors using C++ vectors (for rank 1), vectors of vectors (for rank 2), and so on. This is transparent, makes indexing easy, and allows us to see the rank of each tensor directly from its type.
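To make that concrete, here is a small illustrative helper of my own (not from the original repository) that reads off the shape of one of these nested-vector tensors; the rank is already visible in the type:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

using namespace std;

// Shape of a rank-3 tensor stored as nested vectors, given as
// [height, width, depth]. Assumes the tensor is rectangular,
// i.e. every inner vector has the same size.
vector<size_t> shapeOf(const vector<vector<vector<float>>> &t)
{
    if (t.empty() || t[0].empty()) return { t.size(), 0, 0 };
    return { t.size(), t[0].size(), t[0][0].size() };
}

int main()
{
    // A 2x3 "image" with 3 colour channels, all zeros
    vector<vector<vector<float>>> image(
        2, vector<vector<float>>(3, vector<float>(3, 0.f)));
    for (size_t s : shapeOf(image)) cout << s << " "; // prints: 2 3 3
    cout << endl;
}
```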
Production C++ frameworks don’t do it this way — they typically store tensors as values interleaved into a single big array, wrapped up in a tensor class that knows how to index into it. With such a design, unlike our code, all tensors will have the same C++ class type regardless of their rank.

There is a practical issue with the ordering of tensor indices. Knowing the shape of a tensor is not enough: we also have to know which index is which. For example, an RGB image that is 128 pixels square has shape [128, 128, 3] if we index the height, width, and colour channels in that order, or [3, 128, 128] if we separate out the individual colour channels and index them first. Unfortunately, both layouts are in common use. Keras exposes the choice through a flag and uses the names channels_last for the former layout, historically the default in Tensorflow as it often runs faster on CPUs, and channels_first for the latter, used by many systems as it parallelises better with GPUs. The code in this example uses channels_last ordering.

This definition of a tensor is good enough for us, but in mathematics a tensor is not just an array of numbers but an object that sits in an algebraic space and can be manipulated in ways intrinsic to that space. Our data structures make no attempt to capture this, just as a C++ vector makes no effort to capture the properties of a mathematical vector space.

### Layers used in our model

Each layer function takes a tensor as input, which I refer to within the function as in. Trained layers also take further tensors containing the weights and biases for the layer.

#### Convolution layer

Here’s the nub of this one, implemented in the convolve function.

```cpp
for (size_t k = 0; k < nkernels; ++k) {
    for (size_t y = 0; y < out_height; ++y) {
        for (size_t x = 0; x < out_width; ++x) {
            for (size_t c = 0; c < depth; ++c) {
                for (size_t ky = 0; ky < kernel_height; ++ky) {
                    for (size_t kx = 0; kx < kernel_width; ++kx) {
                        out[y][x][k] += weights[ky][kx][c][k] *
                            in[y + ky][x + kx][c];
                    }
                }
            }
            out[y][x][k] += biases[k];
        }
    }
}
```

Seven nested for-loops! That’s a lot of brute force.

The weights for this layer, supplied in a rank-4 tensor, define a set of convolution kernels (or filters). Each kernel is a rank-3 tensor, width-height-depth. For each pixel in the input, and for each channel in the colour depth, the kernel values for that channel are multiplied by the surrounding pixel values, depending on the width and height of the kernel, and summed into a single value which appears at the same location in the output. We therefore get an output tensor containing a matrix of (almost) the same size as the input image, for each of the kernels.

(Note for audio programmers: FFT-based fast convolution is not generally used here, as it works out slower for small kernels)

Vaguely speaking, this has the effect of transforming the input into a space determined by how much each pixel’s surroundings have in common with each of the learned kernel patterns, which presumably capture things like whether pixels have a common colour or whether there is common activity between a pixel and neighbouring pixels in a particular direction due to edges or lines in the image.

Usually the early convolution layers in an image classifier will explode the amount of data being passed through the network.
In our small model, the first convolution layer takes in a 128×128 image with 3 colour channels, and returns a 128×128 output for each of 32 learned kernels: nearly ten times as much data, which will then be reduced gradually through later layers. But fewer weight values are needed to describe a convolution layer compared with the dense layers we’ll see later in our network, so the size of the trained layer (what you might call the “code size”) is relatively smaller.

I said above that the output matrices were “almost” the same size as the input. They have to be a bit smaller, as otherwise the kernel would overrun the edges. To allow for this, we precede the convolution layer with a…

#### Zero-padding layer

This can be found in the zeroPad function. It creates a new tensor filled with zeros and copies the input into it:

```cpp
for (size_t y = 0; y < in_height; ++y) {
    for (size_t x = 0; x < in_width; ++x) {
        for (size_t c = 0; c < depth; ++c) {
            // assumed reconstruction: copy the input into the interior of
            // the zero-filled output, offset by the pad amounts
            // (pad_y and pad_x are placeholder names)
            out[y + pad_y][x + pad_x][c] = in[y][x][c];
        }
    }
}
```

All this does is add a zero-valued border around all four edges of the image, in each channel, in order to make the image sufficiently bigger to allow for the extent of the convolution kernel.

Most real-world implementations of convolution layers have an option to pad the input when they do the convolution, avoiding the need for an explicit zero-padding step. In Keras for example the convolution layers have a padding parameter that can be either valid (no extra padding) or same (provide padding so the output and input sizes match). But I’m trying to keep the individual layer functions minimal, so I’m happy to separate this out.

#### Activation layer

Again, this is something often carried out in the trained layers themselves, but I’ve kept it separate to simplify those. This introduces a non-linearity by applying some simple mathematical function to each value (independently) in the tensor passed to it.

Historically, for networks without convolution layers, the activation function has usually been some kind of sigmoid function mapping a linear input onto an S-shaped curve, retaining small differences in a critical area of the input range and flattening them out elsewhere. The activation function following a convolution layer is more likely to be a simple rectifier. This gives us the most disappointing acronym in all of machine learning: ReLU, which stands for “rectified linear unit” and means nothing more exciting than

```cpp
if (x < 0.0) { x = 0.0; }
```

applied to each value in the tensor.

We have a different activation function right at the end of the network: softmax. This is a normalising function used to produce final classification estimates that resemble probabilities summing to 1:

```cpp
float sum = 0.f;
for (size_t i = 0; i < sz; ++i) {
    out[i] = exp(out[i]);
    sum += out[i];
}
if (sum != 0.f) {
    for (size_t i = 0; i < sz; ++i) {
        out[i] /= sum;
    }
}
```

#### Max pooling layer

The initial convolution layer increases the amount of data in the network, and subsequent layers then reduce this into a smaller number of hopefully more meaningful higher-level values. Max pooling is one way this reduction happens. It reduces the resolution of its input in the height and width axes, by taking only the maximum of each NxM block of pixels, for some N and M which in our network are both always 2.
From the function maxPool:

```cpp
for (size_t y = 0; y < out_height; ++y) {
    for (size_t x = 0; x < out_width; ++x) {
        for (size_t i = 0; i < pool_y; ++i) {
            for (size_t j = 0; j < pool_x; ++j) {
                for (size_t c = 0; c < depth; ++c) {
                    float value = in[y * pool_y + i][x * pool_x + j][c];
                    out[y][x][c] = max(out[y][x][c], value);
                }
            }
        }
    }
}
```

#### Flatten layer

Interleaves a rank-3 tensor into a rank-1 tensor, otherwise known as a single vector.

```cpp
size_t i = 0; // running output index (assumed declaration)
for (size_t y = 0; y < height; ++y) {
    for (size_t x = 0; x < width; ++x) {
        for (size_t c = 0; c < depth; ++c) {
            out[i++] = in[y][x][c];
        }
    }
}
```

Why? Because that’s what the following dense layer expects as its input. By this point we are saying that we have used as much of the structure in the input as we’re going to use, to produce a set of values that we now treat as individual features rather than as a structure. With a library that represents tensors using an interleaved array, this will be a no-op, apart from changing the nominal rank of the tensor.

#### Dense layer

A dense or fully-connected layer is an old-school neural network layer. It consists of a single matrix multiplication, multiplying the input vector by the matrix of learned weights, then a vector addition of bias values. From our dense function:

```cpp
vector<float> out(out_size, 0.f);
for (size_t i = 0; i < in_size; ++i) {
    for (size_t j = 0; j < out_size; ++j) {
        out[j] += weights[i][j] * in[i];
    }
}
for (size_t j = 0; j < out_size; ++j) {
    out[j] += biases[j];
}
```

Our network has two of these layers, the second of which produces the final prediction values.

That’s pretty much it for the code. There is a bit of boilerplate around each function, and some extra code to load in an image file using libpng, but that’s about all.

There is one other kind of layer that appears in the original Keras model: a dropout layer. This doesn’t appear in our code because it is only used while training. It discards a certain proportion of the values passed to it as input, something that can help to make the following layer more robust to error when trained.

## Further notes

### Precision and repeatability

Training a network is a process that involves randomness. A given model architecture can produce quite different results on separate training runs. This is not the case when running a trained network to make a classification or prediction. Two implementations of a network that both use the same basic arithmetic datatypes (e.g. 32-bit floats) should produce the same results when given the same set of trained weights and biases. If you are trying to reproduce a network’s behaviour using a different library or framework, and you have the same trained weights to use with both implementations, and your results look only “sort of” similar, then they’re probably wrong.

The differences between implementations in terms of channel ordering and indexing can be problematic in this respect. Consider the convolution function with its seven nested for-loops. Load any of those weights with the wrong index or in the wrong order and of course it won’t work, and there are a lot of possible permutations there. There seems to be more than one opinion about how kernel weights should be indexed, for example, and it’s easy to get them upside-down if you’re converting from one framework to another.

### Should I use code like this in production?

Probably not!

First and most trivially, this isn’t a very efficient arrangement of layers. Separating out the zero-padding and activation functions means more memory allocation and copying.
Second, there are many ways to speed up the expensive brute-force layers (that is, the convolution and dense layers) dramatically even on the CPU, using vectorisation, careful cache and memory management, and faster matrix and convolution algorithms. If you don’t take advantage of these, you’re wasting time for no purpose; if you do, you’re repeating work other people are doing in libraries already.

I adapted the example network to use the tiny-dnn header-only C++ library, and it ran roughly twice as fast as the code above. (I’m a sucker for libraries named tiny-something, even if, as in this case, they are no longer all that tiny.) That version of the example code can be found in the with-tiny-dnn directory in the repo. It’s ugly, because I had to load and convert the weights from a system that uses channels-last format into one that expects channels-first. But I only had to write that code once, and it wouldn’t have been necessary if I’d trained the model in a channels-first layout. I might also have overlooked some simpler way to do it.

Third, this approach is fragile in the face of changes to the model architecture, which have to be duplicated exactly across the model implementation and the separately-loaded weights. It would be preferable to load the model architecture into your program, rather than load the weights into a hand-written architecture. To this end it seems to make sense to use a library that supports importing and exporting models automatically.

All this said, I find it highly liberating to realise that one could just sketch out these few lines of code and have a working result.
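As a footnote, the caller’s side of such a network is equally plain. Here is a small sketch of my own (not from the flower.cpp repository; the label names are an assumption based on the Tensorflow flower-pictures dataset) showing how five softmax outputs turn into a class label:

```cpp
#include <algorithm>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>

using namespace std;

int main()
{
    // Stand-in for the return value of classify(): five softmax
    // values, one per class, summing to 1.
    vector<float> scores { 0.05f, 0.72f, 0.10f, 0.08f, 0.05f };

    // The label order must match the order used in training; these
    // names are illustrative, not taken from the trained model.
    vector<string> labels { "daisy", "dandelion", "rose", "sunflower", "tulip" };

    // The predicted class is simply the index of the largest score.
    size_t best = distance(scores.begin(),
                           max_element(scores.begin(), scores.end()));
    cout << labels[best] << " (" << scores[best] << ")" << endl; // dandelion (0.72)
}
```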
http://mathhelpforum.com/pre-calculus/19752-vectors-2d-space-again.html
# Math Help - vectors in 2D space again

1. ## vectors in 2D space again

question: With respect to an origin, O, the points A, B and C have position vectors (2i+5j)m, (-i+2j)m and (7i+15j)m respectively.
(a) Express vector BA, in terms of i and j.
(b) Given that P is the point on BA, between B and A, such that BP/PA = 1/2, write down vector BP, in terms of i and j.
(c) Hence, or otherwise, find vector OP, in terms of i and j.

my ans:
(a) -3m(i+j)
(b) -m(i+j)
(c) 3mj

book's ans:
(a) 3i+3j
(b) i+j
(c) 3j

i have no idea where all the 'm' goes

2. Originally Posted by afeasfaerw23231233

question: With respect to an origin, O, the points A, B and C have position vectors (2i+5j)m, (-i+2j)m and (7i+15j)m respectively. (a) Express vector BA, in terms of i and j. (b) Given that P is the point on BA, between B and A, such that BP/PA = 1/2, write down vector BP, in terms of i and j. (c) Hence, or otherwise, find vector OP, in terms of i and j. my ans: (a) -3m(i+j) (b) -m(i+j) (c) 3mj book's ans: (a) 3i+3j (b) i+j (c) 3j i have no idea where all the 'm' goes

The "m" is probably just a distance unit: a meter. You have the correct answers for a) and b), the book didn't print (I presume) the negative sign.

-Dan

3. oops, thanks!

$\overrightarrow{OA}=2\overrightarrow{i}+5\overrightarrow{j}, \ \overrightarrow{OB}=-\overrightarrow{i}+2\overrightarrow{j}$

Then

$\overrightarrow{BA}=\overrightarrow{OA}-\overrightarrow{OB}=2\overrightarrow{i}+5\overrightarrow{j}+\overrightarrow{i}-2\overrightarrow{j}=3\overrightarrow{i}+3\overrightarrow{j}$
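For completeness (the thread never spells out parts (b) and (c)), the remaining steps follow directly from the corrected $\overrightarrow{BA}$; a sketch using the thread's own values: since $BP/PA = 1/2$, P divides BA so that

$\overrightarrow{BP} = \frac{1}{3}\overrightarrow{BA} = \frac{1}{3}(3\overrightarrow{i}+3\overrightarrow{j}) = \overrightarrow{i}+\overrightarrow{j}$

and then

$\overrightarrow{OP} = \overrightarrow{OB} + \overrightarrow{BP} = (-\overrightarrow{i}+2\overrightarrow{j}) + (\overrightarrow{i}+\overrightarrow{j}) = 3\overrightarrow{j}$

matching the book's answers for (b) and (c).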
https://www.physicsforums.com/threads/why-is-this-happening-trig-subs.205172/
# Why is this happening (trig subs)

1. Dec 18, 2007

### frasifrasi

For the integral 1/x*sqrt(t^2 - 1)

After I do the substitutions, it comes out to be sec(x)tan(x)/sec(x)tan(x), so everything cancels out and the integral evaluated is just x (or theta), but what do I sub theta for when evaluating the integral?

I know this is supposed to be arcsec, but I am not visualizing how I get to that point. Thank you.

2. Dec 18, 2007

### rocomath

two variables ... which variable are you integrating with respects to?

3. Dec 18, 2007

### frasifrasi

theta... I guess I have to change the limits of integration.

4. Dec 18, 2007

### rocomath

5. Dec 18, 2007

### frasifrasi

yes, letting x = sec$\theta$

If I let sqrt(2) = sec$\theta$, I am looking for arcsec(sqrt(2)), which is pi/4 since cos(pi/4) = 1/sqrt(2), so I would convert the limits. I forgot that was required since we are mostly doing indefinite integrals.

6. Dec 18, 2007

### HallsofIvy

Staff Emeritus

No that wasn't the question- your original integral involves both x and t. Was that a typo? Did you mean dx/(x*sqrt(x^2-1)) or dt/(t*sqrt(t^2-1))?

7. Dec 18, 2007

was typo.
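Written out, the substitution the thread describes goes like this (a sketch, reading the integrand as $\frac{1}{x\sqrt{x^2-1}}$, i.e. treating the $t$ as the typo it turned out to be): with $x = \sec\theta$ we have $dx = \sec\theta\tan\theta\,d\theta$ and $\sqrt{x^2-1} = \tan\theta$, so

$\int \frac{dx}{x\sqrt{x^2-1}} = \int \frac{\sec\theta\tan\theta}{\sec\theta\tan\theta}\,d\theta = \int d\theta = \theta + C = \operatorname{arcsec}(x) + C$

and for a definite integral one converts the limits instead, e.g. $x=\sqrt{2}$ gives $\theta = \operatorname{arcsec}(\sqrt{2}) = \pi/4$, as in post 5.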
https://solvedlib.com/n/9-kwk-ch-ex-129-applying-the-ideal-gas-law3allempts-ie-chcck,19035248
# Ch. Ex. 129: Applying the Ideal Gas Law

###### Question:

Oxygen can be produced by the thermal decomposition of mercuric oxide:

2 HgO(s) → 2 Hg(l) + O2(g)

What volume of O2 is produced at 30.7 °C and 0.945 atm by the decomposition of 28.8 g of HgO?
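A worked solution sketch (mine, not from the original page), using the values as reconstructed above and $R = 0.08206\ \mathrm{L\,atm\,mol^{-1}\,K^{-1}}$ with the molar mass of HgO taken as $216.59\ \mathrm{g/mol}$:

$n_{\mathrm{HgO}} = \frac{28.8\ \mathrm{g}}{216.59\ \mathrm{g/mol}} \approx 0.133\ \mathrm{mol}, \qquad n_{\mathrm{O_2}} = \tfrac{1}{2}\,n_{\mathrm{HgO}} \approx 0.0665\ \mathrm{mol}$

$V = \frac{n_{\mathrm{O_2}} R T}{P} = \frac{0.0665 \times 0.08206 \times (30.7 + 273.15)}{0.945} \approx 1.75\ \mathrm{L}$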
https://socratic.org/questions/how-do-you-write-an-equation-of-a-line-that-goes-through-6-7-with-m-1
# How do you write an equation of a line that goes through (-6,-7) with m = -1?

##### 1 Answer

Apr 30, 2015

The equation of a line given a point and a slope is: $y - {y}_{p} = m \left(x - {x}_{p}\right)$, so:

$y + 7 = - 1 \left(x + 6\right) \Rightarrow y = - x - 6 - 7 \Rightarrow y = - x - 13$.
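As a quick check (not part of the original answer), substitute the point back in: at $x=-6$, $y = -(-6) - 13 = 6 - 13 = -7$, so $(-6,-7)$ does lie on $y = -x - 13$, and the coefficient of $x$ gives the required slope $m = -1$.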
https://www.transtutors.com/questions/week-5-quiz-question-1-the-net-present-value-is-found-by-subtracting-a-project-s-ini-3169389.htm
Week 5 quiz

Question 1 The net present value is found by subtracting a project's initial investment from the present value of its cash inflows discounted at a rate equal to the project's internal rate of return.

Question 2 In capital budgeting, the preferred approaches in assessing whether a project is acceptable are those that integrate time value procedures, risk and return considerations, and valuation concepts.

Question 3 The ________ measures the amount of time it takes the firm to recover its initial investment.

Question 4 In the case of annuity cash inflows, the payback period can be found by dividing the initial investment by the annual cash inflow.

Question 5 If the projects have five-year lives, the range of the net present value for Project B is approximately ________. (See Table 10.1.)

Question 6 Some firms use the payback period as a decision criterion or as a supplement to sophisticated decision techniques, because

Question 7 Should Tangshan Mining company accept a new project if its maximum payback is 3.25 years and its initial after-tax cost is $5,000,000 and it is expected to provide after-tax operating cash inflows of $1,800,000 in year 1, $1,900,000 in year 2, $700,000 in year 3 and $1,800,000 in year 4?

Question 8 The NPV of a project with an initial investment of $1,000 that provides after-tax operating cash flows of $300 per year for four years where the firm's cost of capital is 15 percent is $856.49.

Question 9 The objective of ________ is to select the group of projects that provides the highest overall net present value and does not require more dollars than are budgeted.

Question 10 Which of the following capital budgeting techniques ignores the time value of money?

Question 11 A sophisticated capital budgeting technique that can be computed by solving for the discount rate that equates the present value of a project's inflows with the present value of its outflows is called net present value.

Question 12 The ________ reflects the return that must be earned on the given project to compensate the firm's owners adequately according to the project's variability of cash flows.

Question 13 If a project's payback period is greater than the maximum acceptable payback period, we would accept it.

Question 14 The internal rate of return (IRR) is defined as the discount rate that equates the net present value with the initial investment associated with a project.

Question 15 Many firms use the payback method as a guideline in capital investment decisions. Reasons they do so include all of the following EXCEPT

Question 16 The expected net present value of project A if the outcomes are equally probable and the project has a five-year life is ________. (See Table 10.1.)

Question 17 Examples of sophisticated capital budgeting techniques include all of the following EXCEPT

Question 18 One type of simulation program made popular by the widespread use of personal computers is called

Question 19 Consider the following projects, X and Y, where the firm can only choose one. Project X costs $600 and has cash flows of $400 in each of the next 2 years. Project Y also costs $600, and generates cash flows of $500 and $275 for the next 2 years, respectively. Which investment should the firm choose if the cost of capital is 10 percent?

Question 20 The payback period of a project that costs $1,000 initially and promises after-tax cash inflows of $300 each year for the next three years is 0.333 years.
Question 21 What is the IRR for the following project if its initial after-tax cost is $5,000,000 and it is expected to provide after-tax operating cash flows of ($1,800,000) in year 1, $2,900,000 in year 2, $2,700,000 in year 3 and $2,300,000 in year 4?

Question 22 The IRR is the compound annual rate of return that the firm will earn if it invests in a project and receives the estimated cash inflows.

Question 23 In the context of capital budgeting, risk generally refers to

Question 24 What is the IRR for the following project if its initial after-tax cost is $5,000,000 and it is expected to provide after-tax operating cash inflows of $1,800,000 in year 1, $1,900,000 in year 2, $1,700,000 in year 3 and $1,300,000 in year 4?

Question 25 In capital budgeting, the preferred approaches in assessing whether a project is acceptable integrate time value procedures, risk and return considerations, valuation concepts, and the required payback period.

Question 26 If its IRR is greater than the cost of capital, a project should be accepted.

Question 27 Net present value is considered a sophisticated capital budgeting technique since it gives explicit consideration to the time value of money.

Question 28 If a firm has a limited capital budget and too many good capital projects to fund them all, it is said to be facing the problem of

Question 29 Which of the following statements is false?

Question 30 The minimum return that must be earned on a project in order to leave the firm's value unchanged is
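Several of these questions (Question 7 in particular) come down to the same small calculation. A sketch, not part of the quiz, of the payback-period and NPV arithmetic in C++ using Question 7's numbers (the 10% rate passed to npv is an assumption for illustration; the quiz does not state one):

```cpp
#include <cmath>
#include <iostream>
#include <vector>

using namespace std;

// Payback period: years until cumulative inflows recover the initial cost,
// interpolating within the year in which recovery happens.
double paybackYears(double initialCost, const vector<double> &inflows)
{
    double cumulative = 0.0;
    for (size_t year = 0; year < inflows.size(); ++year) {
        if (cumulative + inflows[year] >= initialCost) {
            return year + (initialCost - cumulative) / inflows[year];
        }
        cumulative += inflows[year];
    }
    return -1.0; // never recovered within the project's life
}

// NPV: discount each inflow at the given rate and subtract the initial cost.
double npv(double initialCost, const vector<double> &inflows, double rate)
{
    double value = -initialCost;
    for (size_t year = 0; year < inflows.size(); ++year) {
        value += inflows[year] / pow(1.0 + rate, year + 1.0);
    }
    return value;
}

int main()
{
    // Question 7: initial cost 5,000,000; inflows 1.8M, 1.9M, 0.7M, 1.8M
    vector<double> cf { 1800000, 1900000, 700000, 1800000 };
    cout << paybackYears(5000000, cf) << endl; // ~3.33 years > 3.25, so reject
    cout << npv(5000000, cf, 0.10) << endl;    // NPV at an assumed 10% rate
}
```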
http://math.stackexchange.com/questions/158849/determining-if-a-language-is-recursively-enumerable
# Determining if a language is Recursively Enumerable

Here is a problem from John Hopcroft's "Introduction to Automata Theory" that I'm having a hard time trying to understand.

> Exercise 9.2.5: Let L be recursively enumerable and let $\overline{L}$ be non-RE. Consider the language:
>
> L' = {0w | w is in L} $\cup$ {1w | w is not in L}
>
> Can you say for certain whether L' or its complement are recursive, RE, or non-RE? Justify your answer.

"Note that the new language defined in the displayed text should be L'; it is different from the given language L, of course. Also, we'll use -L for the complement of L in what follows. Suppose L' were RE. Then we could design a TM M for -L as follows. Given input w, M changes its input to 1w and simulates the hypothetical TM for L'. If that TM accepts, then w is in -L, so M should accept. If the TM for L' never accepts, then neither does M. Thus, M would accept exactly -L, which contradicts the fact that -L is not RE. We conclude that L' is not RE."

and here is what I don't get:

..."Thus, M would accept exactly -L, which contradicts the fact that -L is not RE"....

Why? M is the Turing Machine for -L, so it is supposed to accept -L, right? If we can construct a Turing Machine for a language, then it is RE. We have constructed a TM for something we know is not RE (-L), based on the TM for the supposedly RE L'. So what? How does this lead to the conclusion that L' is not RE? I'm very confused... Any help will be much appreciated. Thanks!

-

It was assumed that the complement of $L$ is not r.e. – André Nicolas Jun 15 '12 at 21:06

The hypothesis of the exercise is that $L$ is RE and $-L$ is not RE. The argument is that if $L'$ were RE, then $-L$ would also be RE, contradicting this hypothesis.

-

$L'$ is not c.e. If $L'$ was c.e., then you can enumerate computably all the elements of $L'$. From this computable enumeration of $L'$, you can make an enumeration of $\overline{L}$ as follows: in the enumeration of $L'$, if you see a string enumerated that starts with $1$ of the form $1w$, then put $w$ into the enumeration of $\overline{L}$. This gives an enumeration of $\overline{L}$, so $\overline{L}$ is c.e. Contradicting the assumption.

Since $L'$ is not c.e., it can not be computable. Note that $-L' = \{0w : w \notin L\} \cup \{1w : w \in L\}$. Apply the same argument to $-L'$ to conclude it is not c.e.
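One way to see the shape of the argument is to write the reduction explicitly; this is only a restatement of the answers above. By the definition of $L'$,

$\overline{L} = \{\, w \mid 1w \in L' \,\}$

so the computable map $f(w) = 1w$ many-one reduces $\overline{L}$ to $L'$. RE languages are closed under such reductions: composing a semi-decider for $L'$ with $f$ would give a semi-decider for $\overline{L}$. Since $\overline{L}$ is non-RE by hypothesis, no semi-decider for $L'$ can exist, i.e. $L'$ is not RE.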
https://www.physicsforums.com/threads/did-i-proved-this-limit-correctly.283382/
# Did I prove this limit correctly?

(The attempted proof was posted as an image at http://img360.imageshack.us/img360/1820/78810094ft8.gif, which is now broken, so the working itself is not recoverable.)

## Answers and Replies

it looks fine to me!
https://tex.stackexchange.com/questions/480891/footnotes-in-bibliography-with-biblatex
# Footnotes in bibliography with biblatex

In my PhD thesis I include a bibliography containing my own publications at the beginning. Not all of them are relevant to the PhD thesis though, e.g. because they resulted from my masters. I would like to show this using a footnote or an annotation at the bottom of the reference list.

```latex
\documentclass{scrbook}
\usepackage{mwe}
\usepackage[defernumbers=true]{biblatex}
\usepackage{filecontents}
\begin{filecontents}{refs.bib}
@misc{ref1, note = {Article 1}, keywords={former}, keywords={myarticles}}
@misc{ref2, note = {Article 2}, keywords={myarticles}}
@misc{ref3, note = {Proceedings 1}, keywords={former}, keywords={myconferences}}
@misc{ref4, note = {Proceedings 2}, keywords={myconferences}}
@misc{ref5, note = {Reference 1}}
@misc{ref6, note = {Reference 2}}
\end{filecontents}
\addbibresource{refs.bib} % assumed: the .bib file must be loaded for this to compile
\begin{document}
\nocite{*}
\defbibnote{thepostnote}{* refer to previous work}
\newrefcontext[labelprefix=A]
\newrefcontext[labelprefix=C]
\printbibliography[keyword=myconferences,
  title={My Conference Contributions},
  heading=subbibliography, postnote=thepostnote]
\newrefcontext
\printbibliography[notkeyword=myarticles, notkeyword=myconferences,
  title={Regular References},
  heading=subbibliography, resetnumbers=true]
\end{document}
```

Ideally, the entries should look like this:

[A1]* Article 1
[A2] Article 2

* refer to previous work

Currently, my workaround is to split the subbibliographies into two parts and use \newrefcontext[labelprefix=*A] and \newrefcontext[labelprefix=A], respectively. This is kind of ugly though, preferably the star should appear after the bracket.

Firstly, if you want to give several keywords you must give them in the same keywords field separated with a comma. Secondly, and more interestingly, you can add a star by redefining the labelnumberwidth format.

```latex
\DeclareFieldFormat{labelnumberwidth}{%
  \mkbibbrackets{#1}%
  \ifkeyword{former}
    {\makebox[0pt][l]{\textsuperscript{*}}}
    {}}
```

The \ifkeyword and \textsuperscript{*} should be self-explanatory. I wrapped the \textsuperscript{*} in a \makebox[0pt][l] so that the asterisk does not take up any space. Normally the numbers are right-aligned and if the asterisk took up space the closing bracket of [A2] would align with the right end of [A1]* and we'd get

[A1]*
 [A2]

The alternative would have been to left-align the labels, but that would result in

[8]
[9]
[10]

If you have a current biblatex version you may also be interested in the option locallabelwidth to avoid excess space in the last bibliography.

```latex
\documentclass{article}
\usepackage[defernumbers=true, locallabelwidth]{biblatex}
\DeclareFieldFormat{labelnumberwidth}{%
  \mkbibbrackets{#1}%
  \ifkeyword{former}
    {\makebox[0pt][l]{\textsuperscript{*}}}
    {}}
\usepackage{filecontents}
\begin{filecontents}{\jobname.bib}
@misc{ref1, note = {Article 1}, keywords={former,myarticles}}
@misc{ref2, note = {Article 2}, keywords={myarticles}}
@misc{ref3, note = {Proceedings 1}, keywords={former,myconferences}}
@misc{ref4, note = {Proceedings 2}, keywords={myconferences}}
@misc{ref5, note = {Reference 1}}
@misc{ref6, note = {Reference 2}}
\end{filecontents}
\addbibresource{\jobname.bib} % assumed: needed to load the file written above
\begin{document}
\nocite{*}
\defbibnote{thepostnote}{* refer to previous work}
```
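The answer's final example breaks off after the \defbibnote line. Presumably it ends the same way as the question's MWE; a reconstruction under that assumption (not part of the original answer):

```latex
% assumed continuation, mirroring the question's MWE
\newrefcontext[labelprefix=A]
\newrefcontext[labelprefix=C]
\printbibliography[keyword=myconferences,
  title={My Conference Contributions},
  heading=subbibliography, postnote=thepostnote]
\newrefcontext
\printbibliography[notkeyword=myarticles, notkeyword=myconferences,
  title={Regular References},
  heading=subbibliography, resetnumbers=true]
\end{document}
```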
http://ludovicarnold.altervista.org/teaching/optimization-machine-learning/model-selection/
# Model selection

A learning algorithm consists in an optimization algorithm applied to the parameters of a model in order to minimize a specific loss. Nonetheless, it is often the case that the learning algorithm itself depends on some parameters being set, such as the complexity of the model (the degree $K$ in the previous example) or the learning rate of a gradient descent procedure. Such parameters, which sit outside the main optimization procedure, are called hyper-parameters. Accordingly, the problem of choosing suitable values for the hyper-parameters is called hyper-parameter selection or model selection, and consists in an optimization problem in which learning a model is the sub-problem.

Although we need to optimize the hyper-parameters w.r.t. the performance on some dataset, we cannot choose model complexity according to the training set, because doing so would lead to poor generalization. In our example, choosing the best degree $K$ according to the training set would inevitably lead to choosing higher degrees for the polynomial, even though they do not make for a better fit on the testing set. However, the testing set is meant to be used for evaluating the performance of a model on unseen data and cannot therefore be used during hyper-parameter selection. If it were, the optimization process would choose values particularly suited to maximizing performance on the test set and thus artificially increase the test-set performance. To solve this problem, the solution usually retained is to use a third dataset, called a validation dataset, to optimize the hyper-parameters. The testing error can then be used safely to evaluate performance.
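As a sketch of what this looks like in code (illustrative only: the error values below are made up, standing in for a real train-and-validate loop over the polynomial degree $K$):

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

using namespace std;

int main()
{
    // Hypothetical errors for candidate degrees K = 0..7, as would come
    // from fitting each model on the training set and then evaluating it
    // on the held-out validation set.
    vector<double> trainError { 9.1, 5.2, 3.0, 1.4, 1.1, 0.7, 0.3, 0.1 };
    vector<double> valError   { 9.3, 5.6, 3.4, 1.9, 2.1, 2.8, 4.0, 6.5 };

    // Hyper-parameter selection: pick the K minimising *validation* error.
    // Training error keeps falling as K grows, so optimising it would
    // always pick the most complex model; the test set stays untouched.
    size_t bestK = 0;
    for (size_t k = 1; k < valError.size(); ++k) {
        if (valError[k] < valError[bestK]) bestK = k;
    }
    cout << "selected K = " << bestK
         << " (train error " << trainError[bestK]
         << ", validation error " << valError[bestK] << ")" << endl;
}
```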
http://clay6.com/qa/40887/if-a-b-c-and-d-find-iii-a-cap-c-cap-d
If $A=\{3,5,7,9,11\}$, $B=\{7,9,11,13\}$, $C=\{11,13,15\}$ and $D=\{15,17\}$, find $A\cap C\cap D$.

(A) $\{3,5,7,9,11\}$  (B) $\{11,13,15\}$  (C) $\{3,5,7,9,11,13,15\}$  (D) $\phi$

There is no element common to $A$, $C$ and $D$, so $A\cap C\cap D=\phi$. Hence (D) is the correct answer.
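The same check in a few lines of Python, using only the sets given in the exercise:

```python
# Intersect the three sets; Python's & operator is set intersection.
A = {3, 5, 7, 9, 11}
C = {11, 13, 15}
D = {15, 17}

print(A & C & D)  # set(), i.e. the empty set -- option (D)
```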
https://bertinsfashion.com/college-grove/how-to-start-latex-document.php
# How To Start Latex Document

Short excerpts from guides and tutorials on getting started with LaTeX (each excerpt is truncated in the source):

• Accessing the Service Menu and Starting the Printer in Diagnostic Mode for the HP Latex 600, 820, and 850, HP Scitex LX800, and HP Designjet L65500 Printer Series.
• Getting Started with LaTeX (School of Mathematics): in addition to the HTML pages listed below, the primer Getting Started with LaTeX is also available in the form of a LaTeX2e input file, and as a DVI file or PDF file; the preamble of the LaTeX input file …
• A very basic guide to start writing in LaTeX right now: unless you are here by chance you would know what LaTeX is, but just to be on the same page, LaTeX is a …
• How to Get Started Using LaTeX (3/11/2016): downloading and starting the programs, setting up a document, writing a document.
• Beginner's LaTeX Guide: this guide should be used as a starting point for learning LaTeX; \begin{document} This is some text about Latex.
• LaTeX/Document Structure: an input file for a LaTeX document could start with the line …; instructs LaTeX to typeset the document in two columns instead of one.
• LaTeX introduction / Quick-start guide: the purpose of the preamble is to tell LaTeX what kind of document you will set up and what packages you are going to need.
• When you are beginning to write a LaTeX document: LaTeX source of Example 1; when you want to start a new paragraph of text …
• Learn Latex in 5 minutes (24/12/2011, http://quicklatex.blogspot.com): the first tutorial in the video tutorial series on how to learn LaTeX; in this tutorial I will show you how to …
• How to write your first document in Latex / TeXstudio (1/02/2016): TeXstudio is a free LaTeX editor; the video is in complete HD.
• Running LaTeX on Your Windows Computer (Jul 4 2013): click the icon on the desktop to start TeXnicCenter … after LaTeX has processed the document.
• How to Make Your PDF Documents: when you include math in your document, LaTeX is going to default to …; to check whether you have a correct PDF file, start …
• Page layout in LaTeX (Piet van Oostrum, Dept. of Computer Science, Utrecht University, September 6, 1998): this article describes how to customize the page …
• LaTeX Line and Page Breaking: the first thing LaTeX does when processing ordinary text is to translate your input file into a string of …
• LaTeX Notes: Structuring Large Documents: as soon as you start to produce documents … \include{lastfile} \end{document} … again, LaTeX looks for …
• Adding line numbers to documents: the most common use is continuous line numbering throughout the document; multi-column and multi-row cells in LaTeX.
• Using Sweave and knitr: Sweave enables the embedding of R code within LaTeX documents to generate a PDF file; to start a new Sweave document …
• Templates and Sample Files: always a good place to start; a simple sample LaTeX file is included to illustrate the use of BibTeX with a LaTeX document.
• Writing a thesis with LaTeX: typical problems that arise while writing a thesis with LaTeX, and suggestions … (the numbers start from 1 throughout the document; \includeonly{filename1 …}).
• How do I make custom page numbering in LaTeX? (Stack Overflow): what I want is to use a different style for the page numbering when the appendixes start.
• How to use graphics in LaTeX: it's important to include the trailing / as this rewrites the \includegraphics path … creating your first LaTeX document.
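None of these excerpts survives long enough to show an actual document, so here is a minimal first LaTeX document of the kind they describe; the class and its options are illustrative.

```latex
\documentclass[11pt,a4paper]{article}

\begin{document}

Hello, \LaTeX! This is my first document.

A blank line, like the one above, starts a new paragraph of text.

\end{document}
```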
https://mathematica.stackexchange.com/questions/172028/sum-using-variables-then-evaluated-with-values-gives-different-result-than-sum-w
# Sum using variables then evaluated with values gives different result than sum with values

I'm trying to do a sum symbolically. However, Mathematica is giving me a different result if I do the sum with numbers or with symbols. What's causing this error?

$Assumptions = m \[Element] Integers && n \[Element] Integers && m >= 0 && n >= 0;
f[i_, j_] := If[OddQ[Min[i, j]], (Min[i, j] + (-1)^Max[i, j])/2, Ceiling[Min[i, j]/2]];
Table[f[i, j], {i, 0, 1}, {j, 0, 4}] // TableForm
F = Sum[f[i, j], {i, 0, m}, {j, 0, n}];
F /. {m -> 1, n -> 4}
Sum[f[i, j], {i, 0, 1}, {j, 0, 4}]

Result:

0 0 0 0 0
0 0 1 0 1

4

2

• If you don't mind me asking, what do you intend to do with the symbolic result, if one can be obtained? I tried running your symbolic calculation, but it ran for minutes, after which I aborted it. How long did it take to run on your system? – MarcoB Apr 26 '18 at 22:39
• About 1-2 minutes. And it's just for fun; I'm trying to come up with an analytical expression to count the number of triangles in this rug (youtube.com/watch?v=HViA6N3VeHw&t=12s) – user1543042 Apr 26 '18 at 23:54
• OddQ on symbolic input will evaluate to False. Might be able to achieve the desired effect using Piecewise instead. – Daniel Lichtblau Apr 27 '18 at 14:24

## 1 Answer

I'm not entirely sure why MMA is ignoring the $Assumptions, but here is the correct expression:

Table[
 Sum[f[i, j], {i, 0, n}, {j, 0, m}] ==
  1/48 (-3 (-1 + (-1)^m) (-1 + (-1)^n) + 12 m n +
    2 Min[m, n] (2 + 3 (-1)^m + 3 (-1)^n - 2 Min[m, n]^2 + 6 m n))
 , {n, 0, 12}, {m, 0, 12}] // Flatten // Tally
(* {{True, 169}} *)

• How did you get that to evaluate? What version are you using? – user1543042 Apr 27 '18 at 1:09
• @user1543042 I didn't get it to evaluate; I calculated the sum myself. – AccidentalFourierTransform Apr 27 '18 at 1:25
• What method did you use to calculate this? – user1543042 May 11 '18 at 19:22
• @user1543042 You should ask on math.SE. You will get much better answers than here. – AccidentalFourierTransform May 11 '18 at 20:51
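Following Daniel Lichtblau's comment, one way to keep the parity test symbolic is a Piecewise definition with a Mod condition instead of OddQ (which immediately evaluates to False on symbolic arguments). This is only a sketch: the numeric check below passes, but whether Sum then produces the closed form has not been verified here.

```mathematica
(* Mod[., 2] == 1 stays unevaluated for symbolic i, j, unlike OddQ *)
fPW[i_, j_] := Piecewise[
   {{(Min[i, j] + (-1)^Max[i, j])/2, Mod[Min[i, j], 2] == 1}},
   Ceiling[Min[i, j]/2]];

(* numeric sanity check against the original definition f *)
Table[fPW[i, j] == f[i, j], {i, 0, 6}, {j, 0, 6}] // Flatten // Union
(* {True} *)
```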
https://datascience.stackexchange.com/questions/13029/what-are-the-advantages-disadvantages-of-off-policy-rl-vs-on-policy-rl
# What are the advantages / disadvantages of off-policy RL vs on-policy RL?

There are various algorithms for reinforcement learning (RL). One way to group them is into "off-policy" and "on-policy" methods. I've heard that SARSA is on-policy, while Q-learning is off-policy. I think they work as follows: [pseudocode omitted]

My questions are:

• How exactly are "on-policy RL" and "off-policy RL" defined?
• Please also let me know if there is an error in my pseudocode.

• I think a good place to start to understand this would be this recent paper: jmlr.org/proceedings/papers/v32/silver14.pdf – Aug 2 '16 at 6:47

## Answer

This was answered on Cross Validated and Stack Overflow:

The reason that Q-learning is off-policy is that it updates its Q-values using the Q-value of the next state $$s′$$ and the greedy action $$a′$$. In other words, it estimates the return (total discounted future reward) for state-action pairs assuming a greedy policy were followed, despite the fact that it's not following a greedy policy.

The reason that SARSA is on-policy is that it updates its Q-values using the Q-value of the next state $$s′$$ and the current policy's action $$a′′$$. It estimates the return for state-action pairs assuming the current policy continues to be followed.

These slides offer some insight on the pros and cons of each one:

• On-policy methods:
  • attempt to evaluate or improve the policy that is used to make decisions,
  • often use soft action choice, i.e. $$\pi(s,a) >0, \forall a$$,
  • commit to always exploring and try to find the best policy that still explores,
  • may become trapped in local minima.
• Off-policy methods:
  • evaluate one policy while following another, e.g. try to evaluate the greedy policy while following a more exploratory scheme,
  • the policy used for behaviour should be soft,
  • policies may not be sufficiently similar,
  • may be slower (only the part after the last exploration is reliable), but remain more flexible if alternative routes appear.

For reference, these are the formulations of Q-learning and SARSA from Sutton and Barto's seminal book:

$$Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha\left[r_{t+1} + \gamma \max_{a} Q(s_{t+1},a) - Q(s_t,a_t)\right] \quad \text{(Q-learning)}$$

$$Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha\left[r_{t+1} + \gamma\, Q(s_{t+1},a_{t+1}) - Q(s_t,a_t)\right] \quad \text{(SARSA)}$$

P.S.: I referenced and quoted the original answer from a different Stack Exchange site, as indicated in this meta question.
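To make the two update rules concrete, here is a minimal sketch in Python; the toy sizes and the hyper-parameters (alpha, gamma, eps) are illustrative and not from the quoted answer.

```python
# Contrast the Q-learning (off-policy) and SARSA (on-policy) updates.
import random
import numpy as np

n_states, n_actions = 16, 4
alpha, gamma, eps = 0.1, 0.99, 0.1
Q = np.zeros((n_states, n_actions))

def eps_greedy(s):
    """Soft behaviour policy: explore with probability eps."""
    if random.random() < eps:
        return random.randrange(n_actions)
    return int(np.argmax(Q[s]))

def q_learning_update(s, a, r, s2):
    # Off-policy: bootstrap with the *greedy* next action, regardless of
    # what the behaviour policy will actually do in s2.
    Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])

def sarsa_update(s, a, r, s2, a2):
    # On-policy: bootstrap with the action a2 that the current
    # eps-greedy policy really takes in s2.
    Q[s, a] += alpha * (r + gamma * Q[s2, a2] - Q[s, a])
```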
https://de.zxc.wiki/wiki/Polyurethane
# Polyurethanes

General structure of polyurethanes: repeat unit for linear polyurethanes produced from a diol and a diisocyanate. The urethane groups are marked in blue; R₁ stands for the "remainder" of the diol used for the synthesis (HO−R₁−OH), R₂ for the "remainder" of the diisocyanate (OCN−R₂−NCO).

Polyurethanes (abbreviation PUR; in everyday usage also PU) are plastics or synthetic resins that result from the polyaddition reaction of dialcohols (diols) or polyols with polyisocyanates. The urethane group (−NH−CO−O−) is characteristic of polyurethanes.

Diols and diisocyanates lead to linear polyurethanes; crosslinked polyurethanes can be produced by reacting triisocyanate-diisocyanate mixtures with triol-diol mixtures. The properties of PU can be varied within a wide range: depending on the degree of crosslinking and/or the isocyanate or OH component used, thermosets, thermoplastics or elastomers are obtained. Quantitatively, polyurethane foams (flexible or rigid) are the most important product class. However, polyurethanes are also used as molding compounds for compression molding, as casting resins (isocyanate resins), as (textile) elastic fiber materials, as polyurethane paints and as polyurethane adhesives.

## History

In 1937 a research group led by Otto Bayer synthesized polyurethanes for the first time from 1,4-butanediol and octane-1,8-diisocyanate, and later from hexane-1,6-diisocyanate, in the laboratories of the IG Farben plant in Leverkusen. The corresponding polyurethane was called Igamid U or Perlon U. Further tests showed that tolylene diisocyanate was significantly more reactive than 1,6-hexane diisocyanate and that reactions with triols led to three-dimensionally crosslinked polyurethanes. In 1940 industrial production began in Leverkusen. However, due to the Second World War and the associated scarcity of raw materials, the market for polyurethanes at first developed only very slowly; until the end of the war, polyurethanes were used only for military purposes in aircraft construction. In 1952, less than 100 t per year of the important polyisocyanate toluene diisocyanate (TDI) were available. From 1952 to 1954, polyester foams were developed, which further increased commercial interest in polyurethanes. With the use of polyether polyols, the importance of polyurethanes grew rapidly: the greater scope for variation in the production of polyether polyols considerably expanded the range of applications. In 1960, over 45,000 t of foam were produced. By 2002, global consumption had risen to around 9 million tons of polyurethane, and by 2007 it rose further to over 12 million tons; the annual growth rate is approx. 5%. In 2011, production in Germany alone, with the main producers Covestro and BASF, was just under 1 million tons, of which around 32% went into building insulation, 20% into furniture and mattresses, 14% into automotive engineering and 10% into paints and coatings.

## Properties

Polyurethanes can have different properties depending on the choice of polyisocyanate and polyol. The density of unfoamed polyurethane varies between around 1000 and 1250 kg/m³. Typical foam densities are around 5 to 40 kg/m³ for flexible block foam or 30 to 90 kg/m³ for rigid block foam.

### Toxicity

Isocyanates can trigger allergies and are suspected of causing cancer. When polyurethanes have reacted completely and no longer contain any monomers, they generally no longer have any harmful properties.
Furthermore, volatile additives such as flame retardants or plasticizers can be added to the polyurethane; depending on the use, these can be absorbed dermally (through the skin) or by inhalation. Guidelines and information sheets for the safe handling of polyurethane raw materials can be obtained from the manufacturers or from ISOPA (the European Association of Diisocyanate and Polyol Manufacturers).

## Manufacturing

Diisocyanate monomers (selection): hexamethylene-1,6-diisocyanate (HDI), toluene-2,4-diisocyanate (TDI), diphenylmethane-4,4'-diisocyanate (MDI), isophorone diisocyanate (IPDI).

Common diol components: polyether polyols (oxygen atoms of the ether marked in blue) and polyester polyols, e.g. made from adipic acid and 1,4-butanediol (oxygen and carbon atoms of the carboxylic acid ester groups marked in blue).

Polyurethanes result from the polyaddition reaction of polyisocyanates with polyhydric alcohols, the polyols. The linkage occurs through the reaction of an isocyanate group (−N=C=O) of one molecule with a hydroxyl group (−OH) of another molecule to form a urethane group (−NH−CO−O−). In contrast to polycondensation, no by-products are split off. Only a few different isocyanate components are used.

Because of the high volatility and consequently hazardous processing of the above monomers, in most cases processors use only prepolymers, which, however, always contain a residual monomer content. This applies particularly to HDI. The usual residual monomer proportions in HDI trimer products (e.g. Desmodur N, Tolonate HDT, Basonat or Duranate) are <0.5% HDI; these products are therefore classified as non-toxic according to the manufacturers' classification and can thus be used professionally, taking into account the manufacturers' protective instructions.

The later properties are essentially determined by the polyol component, because to achieve the desired properties it is usually not the isocyanate component that is adapted (chemically changed) but the polyol component. Mechanical properties can be influenced via the chain length and the number of branches in the polyol. For example, the use of polyester polyols instead of the more common polyether polyols leads to better stability, because polyester polyols have a higher melting point and thus solidify when the polyurethane is applied.

Polyurethane formation requires at least two different monomers, in the simplest case a diol and a diisocyanate. The polyreaction takes place in stages. First, a bifunctional molecule with one isocyanate group (−N=C=O) and one hydroxyl group (−OH) is formed from the diol and the diisocyanate. This can react with other monomers at both ends, creating short molecular chains, so-called oligomers. These can in turn react with further monomers, other oligomers or polymers that have already been formed.

Polyaddition of 1,6-hexane diisocyanate with 1,4-butanediol (n ≈ 40).

### Crosslinking

Linear polyurethanes can be crosslinked with an excess of diisocyanate. The addition of an isocyanate group to a urethane group forms an allophanate group. By trimerizing three isocyanate groups it is also possible to form an isocyanurate group; if multifunctional isocyanates are used, highly branched polyisocyanurates (PIR) are formed, see there. Alternatively, crosslinked or branched polyurethanes can also be produced by adding substances with more than two isocyanate groups, such as PMDI, and triols, such as glycerol.
The use of polyfunctional amines, such as ethylenediamine, also leads to crosslinking. The reaction of isocyanates with amines leads to urea groups, which are still reactive and allow the addition of a further isocyanate group, forming a biuret group.

If a particular polyurethane is to be produced in practice, there are two options: the direct reaction of a polyol with a polyisocyanate (one-step process), and the two-step process. In the two-step process, two prepolymers are produced in the first step: with diisocyanate in excess, the reaction with diols yields an NCO prepolymer; with diol in excess, an OH prepolymer. Only in the second step does the actual polymerization take place, by mixing the prepolymers. The two-step process leads to a very wide-meshed crosslinking of the polymer and is important for flexible PUR foams.

### Foaming

If a small amount of water is added to the reaction mixture, the water reacts with isocyanate groups to form the corresponding unstable carbamic acid, which decomposes to the amine with elimination of carbon dioxide (CO₂). This amine reacts with another isocyanate group to form the corresponding substituted urea. The release of CO₂ therefore does not terminate the polymerization; the carbon dioxide produced foams the reaction mass.

Reaction of isocyanate with water, with formation of CO₂ and a polyurea group.

The density of the foam produced can be varied by the amount of water added.

### Biogenic polyols

As a rule, both the polyols and the polyisocyanates originate from petrochemical raw materials, but polyols based on vegetable oils or lignin can also be used, see polyols. Castor oil can be used as a triol in coatings.

## Application

### Foams

Household sponges made from flexible PUR foam; PU thermal insulation in a plastic jacket composite pipe; spray cans for the production of rigid PU foam.

Foams can be made very easily from polyurethane. The special feature of PUR foams is that processing companies can take semi-finished products (foam in tailored form) or produce the foam in place from liquid components (in situ, "foamed-in-place" foam). The components can also be brought into or onto industrial parts, where the foam then forms.

Flexible PUR foams are used for a wide variety of purposes, especially as upholstery material (e.g. for furniture or car seats), as mattress foam, as carpet backing material, for textile lamination, as cleaning sponges or as filter material. PUR flexible foams are mostly open-cell and are available in a wide range of hardnesses and densities.

PUR rigid foams are mainly used for thermal insulation, e.g. in buildings, cooling devices, heat and cold storage and some pipe systems (plastic jacket composite pipes, flexible composite pipes). There are other, relatively new areas of application for PUR foams in vehicle construction (steering wheels, armrests, soft coatings of handles, interior trim, dashboards, sound insulation, rattle protection, seals, transparent coating of wood decors).

Polyurethane foams designed for thermal insulation have a closed-cell structure so that the cell gases, with their low thermal conductivity, remain in the foam cells. In the past, R 11 (trichlorofluoromethane) was often used as the cell gas.
Because of the ozone-damaging properties of this halogenated hydrocarbon, it was largely replaced by carbon dioxide and currently by cyclopentane; the foam cells then contain a mixture of cyclopentane (approx. 10 to 35%) and carbon dioxide. If the polyurethane foam is not encapsulated diffusion-tight against the environment, the cell gases originally present are gradually replaced by air and water vapor through diffusion, whereby the thermal conductivity of the foam increases. After production, polyurethane foams with carbon dioxide as the cell gas achieve thermal conductivities of approx. 0.029 to 0.033 W m⁻¹ K⁻¹, while polyurethane foams with cyclopentane as the cell gas achieve approx. 0.022 to 0.027 W m⁻¹ K⁻¹.

Polyurethane foams can be made both hard and flexible, with different densities. PU rigid foam panels are available in various densities; some products contain fillers (glass microballoons, aluminum powder). Intended uses are insulation materials as well as model and fixture construction, for which the foam is usually machined.

In the past, polyurethane foams were made flame retardant with pentabromodiphenyl ether. Because of the toxicity of this substance, other flame retardants such as TCPP or expandable graphite are used today.

### Paints, coatings and adhesives

One of the most important uses of polyurethanes is in paints and coatings. Because of their good adhesion properties, polyurethanes are used as primers, and because of their high resistance to solvents, chemicals and weathering, as top and clear coats in many areas of application. These include coil-coating lacquers and floor coatings. Textile coatings and finishes as well as leather finishing should also be mentioned. Flat bonding of different, preferably flexible materials (shoes, wood/furniture, automobile interiors) is another important area of application for polyurethane systems. In medicine, polyurethanes are used as liners in prosthetics of the lower extremities.

Processors use liquid systems, such as moisture-curing prepolymers, 2-component systems, high solids, polyurethane solutions and polyurethane dispersions, but also solids, e.g. granules (TPUs) or powders that are melted or dissolved.

### Casting compounds

• PU vacuum casting resins: various products with a short pot life, mostly for prototypes or pre-series parts, e.g. resembling series materials (thermoplastics for injection molding: ABS, PP, POM, PS, PC, PMMA, etc.) in mechanical and thermal specifications or visual aspects. They are processed in a vacuum casting machine, usually in molds made of addition-curing silicone; for example, to duplicate parts manufactured using rapid prototyping techniques.
• PU high-speed casting resins: relatively easy-to-process products for cast parts, models and tools that have a short pot life and do not have to be processed under vacuum.
• Elastomer-curing PU casting resins: products with different degrees of hardness in the Shore A and Shore D range, for elastic to hard-elastic parts, molds and tools.
• Electrical casting compounds: for encapsulating/sheathing electrical and electronic components (potting), for the purpose of electrical insulation and protection against aggressive environmental conditions (chemical, thermal, vibration, mechanical).
• Edge casting compounds: for casting around/wrapping wood or MDF edges.
With polyurethane as the edge potting material, edges are reliably protected against knocks, scratches, etc. Edge potting systems can be made lightfast or non-lightfast. Flame protection also plays an important role, especially in public transport applications. The edge potting systems are also resistant to chemical and mechanical influences.

### Special uses

Insulation layer made of rigid polyurethane foam during house construction.

Polyurethane is used to make wound pads, mattresses, shoe soles, seals, hoses, floors, insulating materials, varnishes, adhesives, sealants, skis, car seats, running tracks in stadiums, dashboards, casting compounds, latex-free condoms, cast floors and much more.

• In the optical industry, polyurethane filled with certain polishing agents (e.g. ceria) is used for the CNC polishing of optical functional surfaces.
• In the laboratory equipment industry, polyurethane is used as a coating material for volumetric flasks. The usage temperature ranges from −30 to +80 °C. Brief exposure to higher temperatures of up to 135 °C is permitted, but in the long run this reduces elasticity.
• Book spines are glued with polyurethane in postpress processing.
• In construction, polyurethane is used as a 1- or 2-component foam (assembly foam, expansion foam) for sealing joints in concrete before pouring, for stabilizing foundations, for lifting parts of buildings and floors, and for installing windows and doors. In the Netherlands in particular, it is also used as flooring in residential buildings.
• Rigid polyurethane foam is used as an insulating layer in sandwich elements. The elements consist of an inner and an outer sheet (aluminum or coated steel), with the space in between filled by the expanding PU foam. These sandwich elements are mainly used in industrial construction for system halls, as they are prefabricated and can be assembled quickly; wall and roof constructions are thus created in a short time, insulated and immediately finished inside and outside. Sandwich elements are also used in insulated roller and sliding doors (garage doors). In addition, rigid PUR foam is used for protection against the cold, as this foam slows down or prevents vapor diffusion. Usually the pipes are clad with sheet metal (stainless steel, galvanized steel, aluminum, galvanized aluminum or aluminized sheet steel), similar to the sandwich method, and then filled with the two-component foam.
• PU elastomer is often used for textile fibers. These fibers are not necessarily made from 100% polyurethane. Polyurethane is also used as a microfoam for breathable membranes for rainwear.
• Due to their excellent mechanical properties, certain polyurethanes are suitable for applications requiring high wear resistance, e.g. transporting bulk goods through polyurethane hoses, or as a protective layer in pipes and pipe bends. Polyurethane is also used as sheathing for electrical cables (e.g. extension cables), for example in the popular H07BQ-F cable.
• A more specialized industrial processing spectrum is found in prototype and sample construction as well as in the foundry industry, where polyurethane products are used to manufacture models and tools of all kinds, but also series parts.
• Polyurethane is used as a filler in the manufacture of multifilament tennis strings.
• Modern footballs (e.g. the Roteiro) are made entirely of polyurethane.
• The outer shell of a bowling ball is made of polyurethane.
• High-quality rubber boots are now also often made of polyurethane, as they are much lighter and more elastic at low temperatures than those made of PVC. In addition, the foamed polyurethane offers far better insulation against the cold.
• Latex-free condoms are made of polyurethane. These are thinner, are supposed to transmit more sensation, and are well tolerated by people with latex allergies. Compared to the usual latex condoms, however, they are often more expensive (as of early 2011). PUR is stronger but less flexible than latex.
• It is used more and more often as a coating for silicone implants because tissue bonds well with it.
• The first production vehicle with a complete polyurethane body is the Artega GT.
• Many process steps are necessary for the manufacture of semiconductor wafers. To ensure an even surface, the wafers are repeatedly polished in between steps (see CMP, chemical-mechanical polishing). In most cases the polishing plate consists of a polyurethane-coated plastic; small polishing particles placed between the polishing pad and the wafer provide the abrasion.
• In the jewelry industry, PU is used as an insert for various chains (neck, hand and ankle chains), which creates a special look.
• The running surfaces of inline skates, skateboard wheels and roller-coaster wheels are made of PU, as are some conveyor belts. The PU largely determines the running properties of the wheels.
• Bushings of skateboard trucks are also made of PU.
• Shoe soles and standing mats in the health sector: PU makes them soft and elastic.
• One of the two components of the Alcantara imitation leather.
• In cosmetics as a component of color cosmetics, skin care, hair care and sun protection products.

## Trade names

• Block material: NECURON, obomodulan, Ureol, Raku-Tool, RenShape
• Sealing compounds: Betamate, Sikaflex, Dymonic NT, Raku Pur
• Fibers: elastane (spandex), Lycra, Dorlastan
• Rigid foams: steinothan, BauderPIR, Baytherm, Baydur, Elastolit
• Adhesives: Baycoll, Beli-Zell, Desmocoll, Sikaflex, Gorilla Glue, Delo-Pur
• Cosmetics: Baycusan (microplastic)
• Lacquers and coatings: Lupranol, Lupranat, Bayhydrol, Bayhydur, Sikafloor, Desmodur/Desmophen (= DD lacquers), Voranol, Voranate, Suprasec, Basonat, Sovermol, Tolonate, Duranate
• Membranes: Dermizax
• Polyester-urethane rubber: Baytec, Cellasto, Vulkollan, Elasturan, Sylomer, Sylodyn, Urepan, Regufoam
• PU films: Walopur, Walotex, Platilon
• Thermoplastic polyurethanes: Elastollan, Desmopan
• Potting compounds: Arathane (electronics), Baygal/Baymidur (electrical and electronic potting compounds), Bectron (electronics), Elastocoat, Fermadur, RAKU PUR potting compound (electronics), Stobicast (electrical engineering, electronics), WEVO potting compound (electronics), Wepuran potting compound (electronics)
• Flexible foams: Bayflex, Elastoflex, Elastofoam, Fermapor K31, Plasthan, RAKU PUR sealing foam

## Norms

• EN 13165 Thermal insulation products for buildings - Factory made rigid polyurethane foam (PU) products - Specification.

## Literature

• Reinhard Leppkes: Polyurethanes - a material with many faces. 5th edition. Verlag Moderne Industrie, 2003, ISBN 3-478-93100-2.
• Karl Oberbach: Saechtling plastics pocket book. 28th edition. Hanser, 2001, ISBN 3-446-21605-7.
• Günter Oertel (ed.): Kunststoff-Handbuch - Vol. 7 Polyurethane. 3rd edition. Carl Hanser Verlag, 1993, ISBN 3-446-16263-1.
• D.C. Allport, D.S. Gilbert, S.M. Outterside (eds.): MDI and TDI: safety, health and environment. A source book and practical guide. John Wiley & Sons Ltd., 2003, ISBN 0-471-95812-3.
• Karl F. Berger, Sandra Kiefer (eds.): Seal Technology Yearbook 2007. ISGATEC, Mannheim 2006, ISBN 978-3-9811509-0-2.
• Konrad Uhlig: Polyurethane pocket book: with 34 tables. 3rd edition. Hanser-Verlag, Munich and Vienna 2006, ISBN 978-3-446-40307-9.
• Karl Hübner: 75 years of polyurethanes - "You are probably not the right man". In: Chemistry in Our Time, vol. 46, no. 2, 2012, pp. 120-122, doi:10.1002/ciuz.201290014.
• Bodo Müller, Walter Rath: Formulation of Adhesives and Sealants. 2nd edition. Vincentz Network, Hanover 2009, ISBN 978-3-86630-818-3.
https://www.expii.com/t/differentiating-an-integral-function-using-chain-rule-9183
# Differentiating an Integral Function Using Chain Rule - Expii

To differentiate an integral function $$\int_{g(x)}^{h(x)} f(t)\,dt$$ with varying endpoints $$g(x), h(x)$$, you can use the Fundamental Theorem of Calculus (FTC) together with the chain rule.
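Written out, this gives the standard identity (assuming $$f$$ is continuous and $$g, h$$ are differentiable; the concrete example below is ours, not from the excerpt):

$$\frac{d}{dx}\int_{g(x)}^{h(x)} f(t)\,dt = f\bigl(h(x)\bigr)\,h'(x) - f\bigl(g(x)\bigr)\,g'(x)$$

For instance, $$\frac{d}{dx}\int_{x}^{x^{2}} \sin t\,dt = 2x\sin(x^{2}) - \sin x$$.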
https://infoscience.epfl.ch/record/103763
## Extrapolation et synthèse

Let G be a locally compact group, H a closed subgroup and 1 < p < ∞. It is well known that the restriction of functions from G to H is a surjective linear contraction from Ap(G) onto Ap(H). We prove, when H is amenable, that every element of Ap(H) with compact support can be extended to an element of Ap(G) whose norm and support we can control. This result was already known for normal subgroups and also for compact subgroups. We obtain the existence of a quasi-coretract in the category BAN, as a substitute for a morphism ΓH such that ResH ∘ ΓH = idAp(H). Indeed, for an amenable subgroup the morphism ΓH does not, a priori, exist. We therefore construct a net of morphisms in BAN from Ap(H) into Ap(G) that converges to idAp(H) in the strong operator topology on Ap(H) (this is, for us, the notion of a quasi-coretract in BAN). Furthermore, if H is metrizable and σ-compact we obtain, more precisely, a sequence. Moreover, our approach allows us to extend to the non-abelian case some work of H. Reiter and C. Herz concerning the spectral synthesis of bounded uniformly continuous functions. The results are new even for the Fourier algebra.

Derighetti, Antoine. Lausanne: EPFL, 2007. urn:nbn:ch:bel-epfl-thesis3824-4
https://tex.stackexchange.com/questions/204173/catchfilebetweentags-bug-with-verb-command
# catchfilebetweentags bug with \verb command

With this example

\RequirePackage{filecontents}
\begin{filecontents*}{main.tex}
%<*example>
\verb|This is an example|
%</example>
\end{filecontents*}

\documentclass{article}
\usepackage{catchfilebetweentags}
\CatchFileBetweenTags{\test}{main.tex}{example}

\begin{document}
\test
\end{document}

I get the output:

! LaTeX Error: \verb ended by end of line.

Everything else works properly. If I copy/paste the piece of code directly into the main.tex file, it works properly.

• The catch commands take a fragment of the file and pass it as the argument of a macro; it is a documented restriction that \verb doesn't work in the argument of another command. – David Carlisle Oct 2 '14 at 8:16

## Answer

The \verb command works only if the argument is not already tokenized. The classical situation this shows up in is that you cannot use \verb in the argument of another command. However, the same problem applies here: to save the input to a macro, the 'caught' information has to be tokenized. That can't work, I'm afraid.
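A common way around this (my suggestion, not part of the answer above) is to keep verbatim material out of the tagged fragment and typeset it with \texttt and hand-escaped specials, which survives being passed as a macro argument:

```latex
%<*example>
% \texttt works inside a macro argument; specials such as \ { } # % &
% must be escaped by hand instead of relying on \verb
\texttt{This is an example}
%</example>
```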