Dataset columns: url (string, 14–2.42k chars) · text (string, 100–1.02M chars) · date (string, 19 chars) · metadata (string, 1.06k–1.1k chars)
https://unusedcycles.wordpress.com/category/physics/
# Unused Cycles

## May 30, 2008

### Physics of GPS relativistic time delay

So I realized today I’ve been posting quite a bit about GNU/Linux and mathematics, but I haven’t really done much with physics. So here’s my first physics post! This problem is actually one assigned in the undergraduate general relativity course I took in the spring of 2008. It’s from James B. Hartle’s book Gravity: An Introduction to Einstein’s General Relativity, chapter 6, problem 9.

A GPS satellite emits signals at a constant rate as measured by an onboard clock. Calculate the fractional difference in the rate at which these are received by an identical clock on the surface of Earth. Take both the effects of special relativity and gravitation into account to leading order in $1/c^2$. For simplicity, assume the satellite is in a circular equatorial orbit, the ground-based clock is on the equator, and that the angle between the propagation of the signal and the velocity of the satellite is 90° in the instantaneous rest frame of the receiver.

The problem is very simplified so as to make the calculations doable at the undergraduate level. Thus it uses the simplified geometric form of Newtonian gravity, that is, the line element given by $\displaystyle ds^2=-\left(1+\frac{2\Phi}{c^2}\right)(c\, dt)^2+\left(1-\frac{2\Phi}{c^2}\right)(dx^2+dy^2+dz^2).$ One could, of course, use Schwarzschild coordinates (and this is, in effect, what I did for the general relativistic part). The solution is broken into three parts: orbital information, the special relativistic effects, and the general relativistic effects.

# Orbital Information

The key here (that’s not given in the problem) is that the time $t$ it takes a GPS satellite to orbit Earth is 12 hours. Recalling that the speed of a circular orbit in Newtonian gravity is $v_{\text{orbit}}=\sqrt{\frac{GM}{R}}$, where $R$ is the radius of the orbit and $M$ is the mass of the object being orbited, we get that $\begin{array}{rcl}\displaystyle \frac{2\pi R}{t}&\displaystyle=&\displaystyle \sqrt{\frac{GM}{R}}\vspace{0.3 cm}\\\displaystyle R&\displaystyle=&\displaystyle \sqrt[3]{\frac{GMt^2}{4\pi^2}}\vspace{0.3 cm}\end{array}$ Plugging in the appropriate values gives $R$ = 26,605 km and $v_\text{orbit}$ = 3871.0 m/s.

# Special Relativistic Effects

Let $A$ denote the ground observer and $B$ denote the satellite. Then we can use time dilation to get that $\begin{array}{rcl}\displaystyle\frac{\Delta \tau_{B,S}}{\Delta\tau_{A,S}}&\displaystyle=&\displaystyle\frac{1}{\gamma}\vspace{0.3 cm}\\&\displaystyle=&\displaystyle \sqrt{1-v_\text{orbit}^2/c^2}\vspace{0.3 cm}\\&\displaystyle=&\displaystyle 0.999999999917.\end{array}$

# General Relativistic Effects

For general relativistic effects, note that the frequencies are related by $\displaystyle \frac{\Delta\tau_{B,G}}{\Delta\tau_{A,G}}=\frac{\omega_A}{\omega_B}=\sqrt{\frac{g_{00}\left[\vec{x}(B)\right]}{g_{00}\left[\vec{x}(A)\right]}}=\sqrt{\frac{1-\frac{2GM}{Rc^2}}{1-\frac{2GM}{R_\oplus c^2}}}.$ The derivation of this formula is beyond the scope of chapter 6 and uses Killing vectors and photon geodesics introduced in a later chapter. However, an approximation to this result is given in equation 6.12 of Hartle. Plugging in the appropriate values gives the ratio to be $1+5.2873\times 10^{-10}$, very nearly 1.
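To make these numbers easy to reproduce, here is a small Python sketch (not from the original post) that computes the orbital radius and speed and the two rate ratios; the constants are standard reference values that I am filling in myself:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of Earth, kg
c = 2.998e8          # speed of light, m/s
R_earth = 6.378e6    # equatorial radius of Earth, m
t = 12 * 3600.0      # orbital period, s (12 hours)

# Orbital information: circular-orbit radius and speed from the period
R = (G * M * t**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
v_orbit = math.sqrt(G * M / R)

# Special relativistic rate ratio (time dilation of the moving satellite clock)
sr_ratio = math.sqrt(1 - (v_orbit / c) ** 2)

# General relativistic rate ratio from the static weak-field metric
gr_ratio = math.sqrt((1 - 2 * G * M / (R * c**2)) /
                     (1 - 2 * G * M / (R_earth * c**2)))

print(f"R            = {R/1000:.0f} km")       # ~26,600 km
print(f"v_orbit      = {v_orbit:.1f} m/s")     # ~3870 m/s
print(f"SR ratio     = {sr_ratio:.12f}")       # ~0.999999999917
print(f"GR ratio - 1 = {gr_ratio - 1:.4e}")    # ~5.29e-10
```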
Putting things together, the whole shift is $\begin{array}{rcl}\displaystyle\frac{\omega_A}{\omega_B}=\frac{\Delta\tau_B}{\Delta\tau_A}&\displaystyle=&\displaystyle(0.999999999917)(1+5.2873\times 10^{-10})\vspace{0.3 cm}\\\displaystyle&=&\displaystyle 1+4.4573\times 10^{-10}.\end{array}$ As a check, suppose that a day passes on Earth (in other words, $\Delta\tau_A$ = 24 hours = 86,400 s). Then for the satellite, $(86{,}400\text{ s})(4.4573\times 10^{-10})$ = 38.511 μs more have passed every day. According to Wikipedia, this number is 38 μs. Also according to Wikipedia, the desired frequency on Earth is $\omega_A$ = 10.23 MHz. This means $\omega_B$ must be 0.0045598 Hz less (compare this to Wikipedia’s claim of 0.0045700 Hz less). The real problem is of course not as simple as it was made out to be here. Difficulties arise when one takes into account that Earth is rotating (so the metric is more complicated), that the observer is not necessarily in the plane of the satellite’s orbit, and that the orbit is not perfectly circular. However, these corrections are minute and don’t affect the result very much. Hope this was an interesting read!
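And a short continuation in the same spirit (again my addition, not the author's) that combines the two factors and converts the result into a daily drift and a frequency offset:

```python
# Combine the special and general relativistic factors (values from the sketch above)
sr_ratio = 0.999999999917
gr_factor = 1 + 5.2873e-10
total = sr_ratio * gr_factor           # ~ 1 + 4.457e-10

drift_per_day = (total - 1) * 86400    # extra seconds on the satellite clock per Earth day
f_ground = 10.23e6                     # desired frequency on the ground, Hz
f_offset = f_ground * (total - 1)      # how much lower the onboard frequency must be set

print(f"fractional shift = {total - 1:.4e}")                        # ~4.457e-10
print(f"drift per day    = {drift_per_day * 1e6:.1f} microseconds") # ~38.5
print(f"frequency offset = {f_offset * 1e3:.3f} mHz")               # ~4.56 mHz
```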
2017-08-20 09:42:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 17, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8948217630386353, "perplexity": 419.904216315749}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106367.1/warc/CC-MAIN-20170820092918-20170820112918-00523.warc.gz"}
https://monadical.com/posts/candy-machine-mint-and-reveal.html
How to mint and reveal NFTs with Candy Machine V2 - HedgeDoc 3085 views owned this note <center> # How to mint and reveal NFTs with Candy Machine V2 <big> </big> *Written by Kevin Guevara and Juan Diego García. Originally published 2022-02-09 on the [Monadical blog](https://monadical.com/blog.html).* </center> Metaplex’s first version of Candy Machine had a few too many problems, such as restricting users and reusing the same NFT images. To help users solve these issues, another version of the program has recently been released: Candy Machine v2. This updated version is equipped with a new tool suite that users can use to resolve limitations they’ve encountered in the program. This is great news, but it also means that we’re going to have to learn how to use v2’s new tools. So, we might as well start right now! In this tutorial, I’m going to explain how to create a “mint and wait for reveal” Candy Machine. This will allow users to mint “closed” NFTs and wait for them to be “revealed” at a later date. Delaying the reveal adds an element of luck and excitement to the minting experience, making it a great way to gamify your Candy Machine. In order to do this, you’ll need to drop a Candy Machine v2 with two of the new settings configurations: [whitelistMintSettings](https://github.com/metaplex-foundation/docs/blob/main/docs/candy-machine-v2/02-configuration.md#whitelist-settings) and [hiddenSettings](https://github.com/metaplex-foundation/docs/blob/main/docs/candy-machine-v2/02-configuration.md#hidden-settings). With these configurations, you will be able to deploy a Candy Machine that can only be minted by specific users, uses one asset in the creation, and can be minted as often as needed. Once deployed, these features will allow you to simulate a wait for a reveal feature and mint blank NFTs. ## Create your Candy Machine. ### 1. Creating your development environment Before you do anything, you need to install [SOLANA CLI](https://docs.solana.com/es/cli/install-solana-cli-tools), [SPL-TOKEN](https://spl.solana.com/token) and [CANDY MACHINE V2](https://github.com/metaplex-foundation/docs/blob/main/docs/candy-machine-v2/01-getting-started.md). All of the provided links contain installation instructions. ### 2. Prepare your folder structure. Next, folders and files need to be created. For now, you can just create an “assets” folder with this structure: ![](https://docs.monadical.com/uploads/5fb79fe51e47ca767ab94b626.png) The assets folder is where you will save the metadata used in the Candy Machine. Usually, you would allocate all the images that you are going to mint here. Today, you’ll be taking a different approach. **config.json** is the file that you’ll use to define your Candy Machine configuration. For now, populate it with this blank configuration: { "price": 1.0, "number": 10, "gatekeeper": null, "solTreasuryAccount": "<YOUR WALLET ADDRESS>", "splTokenAccount": null, "splToken": null, "goLiveDate": "25 Dec 2021 00:00:00 GMT", "endSettings": null, "whitelistMintSettings": null, "hiddenSettings": null, "storage": "arweave-sol", "ipfsInfuraProjectId": null, "ipfsInfuraSecret": null, "awsS3Bucket": null, "noRetainAuthority": false, "noMutable": false } ### 3. Define your custom token. *Note: If you are using an existing SPL-TOKEN, you can use that instead and skip this step.* *Note 2: This token would only be used to whitelist wallets that have this token. 
User will still need to pay the minting price in SOL.*

If you are going to create a whitelist, you need to find a way to differentiate the allowed users from the blocked users. In this version of Candy Machine, you can do this by using an [SPL-TOKEN](https://spl.solana.com/token). This token behaves like a whitelist ticket, permitting token owners to mint from the Candy Machine while denying access to those who don’t own the token.

    # CREATE THE TOKEN
    $ spl-token create-token --decimals 0 > token-output.txt

    # CREATE THE ACCOUNT THAT IS GOING TO USE THAT TOKEN
    $ spl-token create-account <TOKEN> > account-treasury.txt

    # MINT AN AMOUNT OF TOKENS TO THAT ACCOUNT
    $ spl-token mint <TOKEN> 1000 <ACCOUNT>

The spl-token CLI uses solana-cli's default configuration file. If you want to use Mainnet, you will need to use **-u mainnet-beta** or update your config file. Save this file. You are going to need it later!

### 4. Create your master edition.

Before you do anything, make sure that you are executing Metaplex on the correct network (either mainnet, testnet or devnet). If you only need to use one asset in the creation, upload the asset to Arweave by creating a master edition NFT and extracting the Arweave URI. To do this, I am going to use the Metaplex interface, with this image as the master token: ![](https://docs.monadical.com/uploads/5fb79fe51e47ca767ab94b62a.png)

> For security and standards purposes, you must use the same wallet that you will use to create your Candy Machine with. If you need a tutorial on how to export your Phantom wallet to your Solana-cli, take a look at [this one](https://monadical.com/posts/export-phantom-wallet.html).

1. Create your asset as an NFT. ![](https://docs.monadical.com/uploads/5fb79fe51e47ca767ab94b62c.png) <br/> ![](https://docs.monadical.com/uploads/5fb79fe51e47ca767ab94b62e.png)
2. With your master token in your wallet, go to Phantom and open your NFT in Solscan.<br/> ![](https://docs.monadical.com/uploads/5fb79fe51e47ca767ab94b630.png) In the metadata, extract the Arweave URI. <br/> ![](https://docs.monadical.com/uploads/5fb79fe51e47ca767ab94b65d.png)

### 5. Prepare your assets.

As the documentation says,

> Your assets consist of a collection of images (e.g., .png) and metadata (.json) files organized in a 1:1 mapping - i.e., each image has a corresponding metadata file.

In this example I’m using only one asset, so I’m going to have an assets folder like this: ![](https://docs.monadical.com/uploads/5fb79fe51e47ca767ab94b633.png)

In 0.json I need to specify my NFT metadata, pointing to my 0.png file.

    {
      "name": "CANDY TOKEN",
      "symbol": "",
      "description": "",
      "seller_fee_basis_points": 500,
      "image": "0.png",
      "attributes": [
        {"trait_type": "Layer-1", "value": "0"},
        {"trait_type": "Layer-2", "value": "0"},
        {"trait_type": "Layer-3", "value": "0"},
        {"trait_type": "Layer-4", "value": "1"}
      ],
      "properties": {
        "creators": [{"address": "<YOUR WALLET ADDRESS>", "share": 100}],
        "files": [{"uri": "0.png", "type": "image/png"}]
      },
      "collection": {"name": "numbers", "family": "numbers"}
    }

Be sure that the creator address is the same address that you used to create your master edition NFT, because we will be using that data for the Candy Machine.

### 6. Edit your config.json file.

I’m now going to create a Candy Machine with the following features:

- Contains 50 NFTs that are all the same file, at a price of 0.0001.
- Uses a whitelist token that is burned every time it is used.
- Has a “go live” date of Jan 15, 2022.
If you want to work on more complex scenarios, check out the [official documentation](https://github.com/metaplex-foundation/docs/blob/main/docs/candy-machine-v2/02-configuration.md). At this point, your config.json file should look like this: { "price": 0.0001, "number": 50, "gatekeeper": null, "solTreasuryAccount": "<YOUR WALLET ADDRESS>", "splTokenAccount": null, "splToken": null, "goLiveDate": "15 Jan 2022 00:00:00 GMT", "endSettings": null, "whitelistMintSettings": { "mode" : { "burnEveryTime": true }, "mint" : "<YOUR TOKEN ADDRESS>", "presale" : false, "discountPrice" : null }, "hiddenSettings": { "name": "<YOUR NFT NAME> ", "uri": "<THE ARWAVE URI>", "hash": "<ALEATORY 32 CHARACTERS STRING>" }, "storage": "arweave", "ipfsInfuraProjectId": null, "ipfsInfuraSecret": null, "awsS3Bucket": null, "noRetainAuthority": false, "noMutable": false } As you can see, I’m using [whitelistMintSettings](https://github.com/metaplex-foundation/docs/blob/main/docs/candy-machine-v2/02-configuration.md#whitelist-settings) and [hiddenSettings](https://github.com/metaplex-foundation/docs/blob/main/docs/candy-machine-v2/02-configuration.md#hidden-settings). Refer to the official document if you wish to make some changes to the configuration. This is what my config.json file looks like after completing the steps so far: <br/> ![](https://docs.monadical.com/uploads/5fb79fe51e47ca767ab94b65e.png) In this configuration, each user that mints is going to receive an NFT with this name:<YOUR NFT NAME> #<EDITION NUMBER> You can add a white or blank space at the end of your NFT name to make it prettier. ### 7. Upload your Candy Machine. > Remember, all of these commands should be executed using the correct RPC. (Mainet, DevNet or your custom RPC). You can verify this by running solana config get. This will also use the wallet you imported for CLI. Open a console in your Candy Machine folder and write this command: ts-node <YOUR METAPLEX FOLDER>/js/packages/cli/src/candy-machine-v2-cli.ts upload -e devnet -k <PATH_TO_KEYPAIR> -cp config.json -c <YOUR CANDY MACHINE NAME> ./assets If you receive a message like: ![](https://docs.monadical.com/uploads/5fb79fe51e47ca767ab94b65f.png) Congrats, your candy machine should be live! You can access the Candy machine address using ts-node <YOUR METAPLEX FOLDER>/js/packages/cli/src/candy-machine-v2-cli.ts show -e devnet -k <PATH_TO_KEYPAIR> -cp config.json -c <YOUR CANDY MACHINE NAME> And should recieve something like this: ![](https://docs.monadical.com/uploads/5fb79fe51e47ca767ab94b660.png) If you don’t receive this, take a look at the error. For example, your console might be showing this error: '(node:107) UnhandledPromiseRejectionWarning:Error: Invalid public key input' If you’re having trouble with an error, review the pubkeys of your config.json when you try to upload your Candy Machine. There, you will find a .cache folder that contains all the Candy Machine info. Delete this folder and retry uploading. ### 8. Test your Candy Machine. If you are working on DevNet, use Metaplex to test your Candy Machine. For this, Metaplex provides a basic UI. It works pretty well for testing purposes. #GO TO THIS FOLDER <YOUR METAPLEX FOLDER>/js/packages/candy-machine-ui #UPDATE THE .ENV FILE nano .env REACT_APP_CANDY_MACHINE_ID=<YOUR CANDY MACHINE ID> REACT_APP_SOLANA_NETWORK=<NETWORK devnet OR mainnet> REACT_APP_SOLANA_RPC_HOST=<YOUR SOLANA RPC> #INSTALL AND RUN THE PROJECT yarn install yarn start Go to localhost:3000 and connect your wallet. 
Once you’ve done that, you should be able to see a UI like this: ![](https://docs.monadical.com/uploads/5fb79fe51e47ca767ab94b639.png)

You should now be able to mint NFTs. Remember, users can only mint an NFT if they have a token.

Without token ![](https://docs.monadical.com/uploads/5fb79fe51e47ca767ab94b637.png)

With token ![](https://docs.monadical.com/uploads/5fb79fe51e47ca767ab94b638.png)

## Reveal your NFTs.

The Candy Machine I created has two problems. Firstly, the creator needs to verify every NFT created with the Candy Machine. This is a laborious and time-consuming task. Secondly, I now need to find a way to “reveal” every NFT and show its real content. You can solve both of these issues by using this open-source tool: [Metaboss](https://github.com/samuelvanderwaal/metaboss). For installation, follow the instructions in the [official doc](https://metaboss.rs/overview.html).

### 1. Identify your Candy Machine NFTs.

If you want to update the metadata or sign a specific NFT, you’ll need mints. Mints are unique keys that allow you to identify an NFT. To get the mints, run this command in Metaboss with your Candy Machine ID and your network:

    metaboss -r <SOLANA_RPC> snapshot mints --candy-machine-id <CANDY_MACHINE_ID> --v2 --output <OUTPUT_DIR>

Open the output file. There you should see an array of public keys. Those are the mint keys of your Candy Machine. If the results are empty, check the command parameters and try again.

### 2. Generate new metadata.

You need to generate the metadata for the revealed NFTs. To do that, generate a JSON file with the configuration for each NFT that you wish to update.

    {
      "mint_account": "MINT_ADDRESS",
      "nft_data": {
        "name": NFT_NAME,
        "symbol": "",
        "uri": ARWEAVE_REVEALED_METADATA_LINK,
        "seller_fee_basis_points": SELLER_FEE_BASIS_POINTS,
        "creators": [
          {
            "address": "<YOUR_CANDY_MACHINE_CREATOR_ADDRESS>",
            "verified": true,
            "share": 0
          },
          {
            "address": "<CREATOR_ADDRESS>",
            "verified": true,
            "share": 100
          }
        ]
      }
    }

For ARWEAVE_REVEALED_METADATA_LINK you can replicate [step 4](https://docs.monadical.com/uu8SYKPyTMOsN_xxDKQrrA?both#4-Create-your-master-edition). The MINT_ADDRESS is the identification found in the [previous step](https://docs.monadical.com/uu8SYKPyTMOsN_xxDKQrrA?both#4-Create-your-master-edition).

### 2.1 Generate new metadata using different assets.

If you’d like to reveal different NFTs in different stages, let’s say 20% one day and 80% the next day, you will need one metadata link for each image that you use in your NFTs. You can use the repo presented below to achieve that. Clone [this repo](https://github.com/Monadical-SAS/CandyMachineCommonScripts) and take a look at *config.json*. In this file, specify how many divisions you want, and the parameters for each one.

| Attribute | Description |
| ------ | ------ |
| Name | The name of the token. |
| Seller_fee_basis_points | The value for the seller fee. |
| Quantity | The number of tokens that are going to be selected. |
| Uri | An Arweave link with the metadata that is going to be updated. |
| Creators | A list of values for each creator: address, verify and share. |

Remember that your first creator should always be the Candy Machine creator address, with the second being your wallet address. Create a *data.json* file with all the mints that you want to update, and run the script. The data.json file is the one generated in the previous step.
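To make the splitting step more concrete, here is a rough, self-contained Python sketch of what a script like generate_files.py might do. It is not the actual repo script, and the input layout (a config.json holding a list of divisions and a data.json holding the mint addresses) is only an assumption inferred from the description above:

```python
import json
import os

# Assumed inputs (formats inferred from the description above, not from the real repo):
#   data.json   -> ["MINT_ADDRESS_1", "MINT_ADDRESS_2", ...]
#   config.json -> {"divisions": [{"name": ..., "seller_fee_basis_points": ...,
#                                  "quantity": ..., "uri": ..., "creators": [...]}, ...]}
with open("data.json") as f:
    mints = json.load(f)
with open("config.json") as f:
    divisions = json.load(f)["divisions"]

assert sum(d["quantity"] for d in divisions) <= len(mints), \
    "total quantity cannot exceed the number of minted NFTs"

index = 0
for reveal_number, division in enumerate(divisions):
    out_dir = f"reveal_{reveal_number}"
    os.makedirs(out_dir, exist_ok=True)
    # Take the next `quantity` mints and write one metadata file per NFT,
    # in the same shape shown in step 2 above.
    for mint in mints[index:index + division["quantity"]]:
        record = {
            "mint_account": mint,
            "nft_data": {
                "name": division["name"],
                "symbol": "",
                "uri": division["uri"],
                "seller_fee_basis_points": division["seller_fee_basis_points"],
                "creators": division["creators"],
            },
        }
        with open(os.path.join(out_dir, f"{mint}.json"), "w") as out:
            json.dump(record, out, indent=2)
    index += division["quantity"]
```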
Note that the summation of **quantity** cannot be greater than the Candy Machine number. Open a console in the repository folder, and run this command:

    python generate_files.py

This will generate one folder per reveal, each with the corresponding JSON files. You can also use this script to generate one big folder with all the NFT metadata to reveal.

### 3. Update the NFT assets.

This is the reveal part. To do this, you need to update the metadata of the minted NFTs. To update the metadata of several NFTs, run the following command:

    metaboss update data-all --keypair <PATH_TO_KEYPAIR> --data-dir <PATH_TO_DATA_DIR>

You can get more information about this command [here](https://metaboss.rs/update.html). You must use the same format as step 2. Each file must contain the data for one NFT and should be saved in the PATH_TO_DATA_DIR directory.

### 4. Sign all the NFTs.

To sign the NFTs generated by the Candy Machine, use the same creator wallet in Metaboss that you used to create the NFTs. Once you’ve completed that step, you can run this command:

    metaboss -r <SOLANA_RPC> sign all --keypair <YOUR_KEYPAIR_FILE> --candy-machine-id <CANDY_MACHINE_ID> --v2

After running the command, you should see the following message: ![](https://docs.monadical.com/uploads/5fb79fe51e47ca767ab94b63b.png)

Go on-chain and verify that the assets were signed (you can use the mint address for that). If it is not working, be sure that you are using the correct RPC and the correct Candy Machine ID.

That’s it! Remember, you can always take a look at the official docs or visit communication channels like the [Metaplex Discord](https://discord.com/invite/metaplex) or their [Twitter](https://twitter.com/metaplex) if you have any questions. Wanting more? Check out [Monadical’s blog](https://monadical.com/blog.html) for other programming tutorials, and [Metaplex’s blog](https://www.metaplex.com/blog) for Metaplex updates and tutorials.
2022-10-04 10:00:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22344642877578735, "perplexity": 6506.40565045617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337490.6/warc/CC-MAIN-20221004085909-20221004115909-00621.warc.gz"}
http://spicewithnice.com/nezih-hasanoglu-fxr/k-7213-drain-black-0688d4
Count of subsets with sum equal to a given number X. Given an array arr[] of length N and an integer X, the task is to find the number of subsets with sum equal to X; in the output we have to report the number of subsets whose total sum of elements equals X. The input's first line has n and x, and the next line contains the n numbers of our set.

INPUT
4 3
-1 2 4 2

OUTPUT
2

(The two subsets of [-1, 2, 4, 2] that sum to 3 are {-1, 2, 2} and {-1, 4}.)

Basically this problem is the same as the subset sum problem, with the only difference that instead of returning whether there exists at least one subset with the desired sum, here we compute the count of all such subsets. The subset sum problem statement: given a set of positive integers and an integer s, is there any non-empty subset whose sum is s? Subset sum can also be thought of as a special case of the 0-1 knapsack problem, and it is assumed that the input set is unique (no duplicates are presented).

Two closely related variants come up in the same discussions. In the partition variant, we ask whether the array can be split into two subsets of equal sum: if the total sum is an odd number we cannot possibly have two equal sets, and otherwise the problem changes into finding whether a subset of the input array has a sum of sum/2 (if we find a subset that equals sum/2, the rest of the numbers must also sum to sum/2). In the minimal-subset variant, you are given a target S and have to print the size of the smallest subset whose sum is greater than or equal to S; if there exists no such subset, print -1 instead.
Please have a strong understanding of the subset sum problem before going through the solution for this problem. One way to find subsets that sum to X is to consider all possible subsets: a power set contains all the subsets generated from a given set, the size of such a power set is 2^N, and a naive or backtracking algorithm that checks every subset therefore takes exponential time. However, for smaller values of X and of the array elements, this problem can be solved using dynamic programming. The two conditions required for dynamic programming are present: the optimal solution to a subproblem leads to a solution for the original problem, and we solve the same subproblems again and again (overlapping subproblems).

Let's understand the states of the DP. Let dp[i][C] be the number of subsets of arr[i..N-1] with sum equal to C. The recurrence is very simple, as there are only two choices for each element: either include the i-th element in the subset or don't. Thus

dp[i][C] = dp[i + 1][C - arr[i]] + dp[i + 1][C].

For example, with arr[] = {1, 1, 1, 1} and X = 1 the answer is 4, since each of the four single-element subsets {1} has sum 1. Note that this algorithm is pseudo-polynomial: it is polynomial in the values of the input, which are exponential in their numbers of bits.
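Here is a short Python sketch of this counting DP (written for this write-up, not taken from any of the quoted sources; it implements the same include-or-exclude recurrence in a space-saving one-dimensional form and assumes non-negative array elements so the sums stay within range):

```python
def count_subsets_with_sum(arr, x):
    """Count subsets of arr (non-negative integers) whose elements sum to x."""
    # dp[c] = number of subsets of the elements processed so far with sum c
    dp = [0] * (x + 1)
    dp[0] = 1  # the empty subset has sum 0
    for value in arr:
        # Iterate sums downwards so each element is used at most once.
        for c in range(x, value - 1, -1):
            dp[c] += dp[c - value]
    return dp[x]

print(count_subsets_with_sum([1, 1, 1, 1], 1))   # 4
print(count_subsets_with_sum([1, 2, 3, 3], 6))   # 3  ({1,2,3}, {1,2,3}, {3,3})
```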
A related counting question from the same sources asks for the number of subsets having a particular XOR value instead of a particular sum. We define a number m such that m = pow(2, floor(log2(max(arr))) + 1) - 1; we get this number by counting the bits in the largest array element, and it is the maximum value any subset XOR will acquire. We then create a 2D array dp[n+1][m+1] such that dp[i][j] equals the number of subsets having XOR value j among the subsets of arr[0..i-1], and fill it one element at a time with the same include-or-exclude choice as before.
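A matching Python sketch for the XOR-value count (again my own illustration; the transition dp[i][j] = dp[i-1][j] + dp[i-1][j ^ arr[i-1]] is the natural include-or-exclude step, stated here as an assumption since the text above only defines the table):

```python
def count_subsets_with_xor(arr, k):
    """Count subsets of arr whose elements XOR to k."""
    # m is one less than the next power of two above max(arr):
    # the largest value any subset XOR can take.
    m = (1 << max(arr).bit_length()) - 1
    if k > m:
        return 0
    n = len(arr)
    # dp[i][j] = number of subsets of arr[0..i-1] with XOR value j
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 1  # the empty subset XORs to 0
    for i in range(1, n + 1):
        for j in range(m + 1):
            # either leave out arr[i-1], or include it (XOR with it)
            dp[i][j] = dp[i - 1][j] + dp[i - 1][j ^ arr[i - 1]]
    return dp[n][k]

print(count_subsets_with_xor([6, 9, 4, 2], 6))   # 2  ({6} and {4, 2})
```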
A related question from the same discussions: how do I count the subsets of a set whose number of elements (or total coin count, if you picture each element as a box of coins) is divisible by 3? My answer: approximately one third of them. At any point, the probability can be converted into a count by multiplying the probability by the number of subsets. Two easy cases make this concrete. (1) If all the boxes have exactly one coin, then the coin count of a subset is just its size, and there surely exists an exact answer. (2) If all the boxes have at most one coin, count only the boxes with exactly one coin and copy each of the original subsets from case 1; each copied subset has the same total count of coins as its original subset, so the probability that a copied subset will have a coin count divisible by 3 is equal to the analogous probability for its original subset.

Another poster gives an array containing only 0's, 1's and 2's and asks for the number of subsets whose sum is divisible by 3, ideally computed from the appearance counts of the 0's, 1's and 2's rather than by generating all the possible subsets. For example, for the array [0, 1, 2] (one 0, one 1 and one 2) the valid subsets are [], [0], [1, 2] and [0, 1, 2], so the answer is 4.

The 1/3 heuristic can be made precise for independent uniform digits. Let $\epsilon_i$ be independent identically distributed random variables with $\epsilon_i\sim\text{Uniform}(\{0,1,2\})$, and define $$S_n=\sum_{i=1}^n\epsilon_i \mod 3.$$ By induction it is quite easy to see that $S_n\sim\text{Uniform}(\{0,1,2\})$ (can you prove it?): $S_1$ is uniform, and adding an independent uniform $\epsilon_{n+1}$ modulo 3 keeps the distribution uniform. Hence $\mathbb{P}(S_n=0)=\mathbb{P}(3\text{ divides }\sum_{i=1}^n\epsilon_i)=1/3$, and since there are $3^N$ equally likely outcomes, the answer is $3^N\cdot 1/3=3^{N-1}$. Finding an exact solution for an arbitrary collection of boxes is non-trivial for large sets of boxes, but inputting a suitable set of boxes (say, at most 200 in total) into any dynamic programming solution to the subset sum problem will show that the empirical probability approaches 1/3 as well.
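As a quick sanity check of the $3^{N-1}$ claim, here is a small brute-force Python snippet (mine, not from the discussion) that enumerates all length-n sequences over {0, 1, 2} and counts those whose sum is divisible by 3:

```python
from itertools import product

def count_divisible_by_3(n):
    """Count length-n sequences over {0, 1, 2} whose sum is divisible by 3."""
    return sum(1 for seq in product((0, 1, 2), repeat=n) if sum(seq) % 3 == 0)

for n in range(1, 7):
    assert count_divisible_by_3(n) == 3 ** (n - 1)
print("count is 3^(n-1) for n = 1..6")
```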
Finally, to make the knapsack connection explicit: subset sum is a special case of the 0-1 knapsack problem. Given an integer bound $W$ and a collection of $n$ items, each with a positive integer weight $w_i$, find a subset $S$ of items that maximizes $\sum_{i\in S} w_i$ while keeping $\sum_{i\in S} w_i \le W$. Motivation: you have a CPU with $W$ free cycles, and want to choose the set of jobs (each taking $w_i$ time) that minimizes the number of idle cycles. As noted above, the basic question underneath all of these variants is: how many subsets can be made by choosing $k$ elements from an $n$-element set? This section is concerned with counting subsets, not lists, so the order of the chosen elements does not matter.
2022-05-16 04:30:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33052730560302734, "perplexity": 756.5075520579687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662509990.19/warc/CC-MAIN-20220516041337-20220516071337-00332.warc.gz"}
https://gmatclub.com/forum/m22-184327.html
# M22-36

Math Expert
Joined: 02 Sep 2009
Posts: 43898

### Show Tags 16 Sep 2014, 00:17

Circles $$X$$ and $$Y$$ are concentric. If the radius of circle $$X$$ is three times that of circle $$Y$$, what is the probability that a point selected inside circle $$X$$ at random will be outside circle $$Y$$?

A. $$\frac{1}{3}$$
B. $$\frac{\pi}{3}$$
C. $$\frac{\pi}{2}$$
D. $$\frac{5}{6}$$
E. $$\frac{8}{9}$$

Difficulty: 15% (low). Question Stats: 85% (00:56) correct, 15% (01:44) wrong, based on 66 sessions.

### Show Tags 16 Sep 2014, 00:17

Official Solution:

We have to find the ratio of the area of the ring around the small circle to the area of the big circle. If $$y$$ is the radius of the smaller circle, then the area of the bigger circle is $$\pi(3y)^2 = 9 \pi y^2$$. The area of the ring $$= \pi(3y)^2 - \pi(y)^2 = 8 \pi y^2$$. The ratio $$= \frac{8}{9}$$.

Senior Manager
Status: Math is psycho-logical
Joined: 07 Apr 2014
Posts: 432

### Show Tags 21 Jan 2015, 05:49

Hey, great that we saw how you do it using the actual variables. However, I just used values: for the radius of X, 6; for the radius of Y, 2. Then the area for X = 36π and the area for Y = 4π, so the ring is 32π, and 32/36 = 8/9.

Intern
Joined: 02 Aug 2017
Posts: 6

### Show Tags 26 Oct 2017, 14:20

I think the easiest way for me was $$\frac{\pi r^2}{\pi (3r)^2}$$. Use values Y = 1, X = 3Y = 3: $$1^2 = 1$$ and $$3^2 = 9$$, so there is a 1/9 chance the point is inside circle Y, or an 8/9 chance it is outside.
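As a quick numeric check (mine, not from the thread), the same ratio can be computed for any radius of circle Y:

```python
import math

r_y = 2.0                  # radius of circle Y (any positive value works)
r_x = 3 * r_y              # circle X has three times the radius
area_x = math.pi * r_x**2
area_y = math.pi * r_y**2
p_outside_y = (area_x - area_y) / area_x
print(p_outside_y)         # 0.888... = 8/9
```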
2018-02-25 09:29:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7234334945678711, "perplexity": 3454.3124495519087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816351.97/warc/CC-MAIN-20180225090753-20180225110753-00323.warc.gz"}
http://brucelerner.com/Blog/2022-02-20_Portfolio%20Suffering%20vs.%20Elation.html
2022-02-20: Portfolio Suffering vs. Elation

I just started reading Transparent Investing: How to Play the Stock Market without Getting Played by Patrick Geddes, and found reference to the “losses hurt worse than gains feel good” argument, along with the implied suggestion that this asymmetry is irrational. I believe it is real, and also mathematically justified. A 20% loss ($100 => $80) takes a 25% ($20/$80) return just to recover ($80 => $100), or a 40% ($40/$100) gain to get to where you’d be with a 20% gain on the original amount. An initial 20% gain is just $20, but after the initial loss it would take a $40 gain to get to the same place – OUCH.
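A tiny Python illustration of the same arithmetic (my own sketch, not from the post):

```python
start = 100.0
rate = 0.20

after_loss = start * (1 - rate)                       # $80 after a 20% loss
recovery_return = (start - after_loss) / after_loss   # $20/$80 = 25% just to get back to $100
target = start * (1 + rate)                           # $120, where a 20% gain would have put you
dollars_needed = target - after_loss                  # $40 of gains needed after the loss
as_pct_of_original = dollars_needed / start           # $40/$100 = 40%

print(f"back to even: {recovery_return:.0%}")                         # 25%
print(f"to match a 20% gain: ${dollars_needed:.0f} "
      f"({as_pct_of_original:.0%} of the original amount)")           # $40 (40%)
```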
2022-08-19 09:04:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3241090774536133, "perplexity": 2201.964435641131}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573630.12/warc/CC-MAIN-20220819070211-20220819100211-00772.warc.gz"}
https://www.physicsforums.com/threads/average-speed-question.184233/
# Average speed question

1. Sep 12, 2007

### anglum

1. The problem statement, all variables and given/known data

A car is moving at a constant speed of 13 m/s when the driver presses down on the gas pedal and accelerates for 11 s with an acceleration of 1:4 m/s2 (i.e. 1.4 m/s²). What is the average speed of the car during the period? Answer in units of m/s.

3. The attempt at a solution

Using the formula distance = Vi t + 1/2 A t squared, I solve for distance and then just simply divide that by the 11 seconds?

Last edited: Sep 12, 2007

2. Sep 12, 2007

### Feldoh

Not sure what you mean by 1:4 m/s^2 -- but your solution looks right.

3. Sep 12, 2007

### D H, Staff Emeritus

No. Average speed is defined as $\Delta d/\Delta t$, where $\Delta t$ is the duration of the time span and $\Delta d$ is the distance traveled during this span. If you average the speeds at arbitrary time points over the time interval you will get a different (and incorrect) answer.
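A quick numeric check of the numbers in this thread (my own snippet, not part of the original discussion; it assumes the garbled "1:4 m/s2" means 1.4 m/s²):

```python
v_i = 13.0   # initial speed, m/s
a = 1.4      # acceleration, m/s^2 (assuming "1:4 m/s2" means 1.4)
t = 11.0     # duration, s

distance = v_i * t + 0.5 * a * t**2   # metres covered during the 11 s
avg_speed = distance / t              # average speed = total distance / total time
print(avg_speed)                      # 20.7 m/s

# For constant acceleration this happens to equal (v_i + v_f) / 2:
v_f = v_i + a * t
print((v_i + v_f) / 2)                # also 20.7 m/s
```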
2017-05-29 10:08:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8516079783439636, "perplexity": 895.437844257137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463612069.19/warc/CC-MAIN-20170529091944-20170529111944-00392.warc.gz"}
https://www.embeddedrelated.com/blogs-11/mp/all/all.php
## Troubleshooting notes from days past, TTL, Linear June 19, 20181 comment General Troubleshooting • Always think “what if”. • Analytical procedures • Precautions when probing equipment • Insulate all but last 1/8” of probe tip • Learn from mistakes • If you get stuck, sleep on it. • Many problems have simple solutions. • Whenever possible, try to substitute a working unit. • Don’t blindly trust test instruments. • Coincidences do happen, but are relatively... ## Linear Feedback Shift Registers for the Uninitiated, Part XV: Error Detection and Correction June 12, 2018 Last time, we talked about Gold codes, a specially-constructed set of pseudorandom bit sequences (PRBS) with low mutual cross-correlation, which are used in many spread-spectrum communications systems, including the Global Positioning System. This time we are wading into the field of error detection and correction, in particular CRCs and Hamming codes. Ernie, You Have a Banana in Your Ear ## Tenderfoot: How to Write a Great Bug Report I am an odd sort of person. Why? Because I love a well written and descriptive bug report. I love a case that includes clear and easy to follow reproduction steps. I love a written bug report that includes all the necessary information on versions, configurations, connections and other system details. Why? Because I believe in efficiency. I believe that as an engineer I have a duty to generate value to my employer or customer. Great bug reports are one part of our collective never-ending... ## Who else is going to Sensors Expo in San Jose? Looking for roommate(s)! This will be my first time attending this show and I must say that I am excited. I am bringing with me my cameras and other video equipment with the intention to capture as much footage as possible and produce a (hopefully) fun to watch 'highlights' video. I will also try to film as many demos as possible and share them with you. I enjoy going to shows like this one as it gives me the opportunity to get out of my home-office (from where I manage and run the *Related sites) and actually... ## Voltage - A Close Look My first boss liked to pose the following problem when interviewing a new engineer.  “Imagine two boxes on a table one with a battery the other with a light.  Assume there is no detectable voltage drop in the connecting leads and the leads cannot be broken.  How would you determine which box has the light?  Drilling a hole is not allowed.” The answer is simple. You need a voltmeter to tell the electric field direction and a small compass to tell the magnetic field... ## What is Electronics Introduction One answer to the question posed by the title might be: "The understanding that allows a designer to interconnect electrical components to perform electrical tasks." These tasks can involve measurement, amplification, moving and storing digital data, dissipating energy, operating motors, etc. Circuit theory uses the sinusoidal relations between components, voltages, current and time to describe how a circuit functions. The parameters we can measure directly are... ## Linear Regression with Evenly-Spaced Abscissae May 1, 20181 comment What a boring title. I wish I could come up with something snazzier. One word I learned today is studentization, which is just the normalization of errors in a curve-fitting exercise by the sample standard deviation (e.g. point $x_i$ is $0.3\hat{\sigma}$ from the best-fit linear curve, so $\frac{x_i - \hat{x}_i}{\hat{\sigma}} = 0.3$) — Studentize me! 
would have been nice, but I couldn’t work it into the topic for today. Oh well. I needed a little break from... ## Linear Feedback Shift Registers for the Uninitiated, Part XIV: Gold Codes April 18, 2018 Last time we looked at some techniques using LFSR output for system identification, making use of the peculiar autocorrelation properties of pseudorandom bit sequences (PRBS) derived from an LFSR. This time we’re going to jump back to the field of communications, to look at an invention called Gold codes and why a single maximum-length PRBS isn’t enough to save the world using spread-spectrum technology. We have to cover two little side discussions before we can get into Gold... ## Crowdfunding Articles? Many of you have the knowledge and talent to write technical articles that would benefit the EE community.  What is missing for most of you though, and very understandably so, is the time and motivation to do it. But what if you could make some money to compensate for your time spent on writing the article(s)?  Would some of you find the motivation and make the time? I am thinking of implementing a system/mechanism that would allow the EE community to... ## How precise is my measurement? Some might argue that measurement is a blend of skepticism and faith. While time constraints might make you lean toward faith, some healthy engineering skepticism should bring you back to statistics. This article reviews some practical statistics that can help you satisfy one common question posed by skeptical engineers: “How precise is my measurement?” As we’ll see, by understanding how to answer it, you gain a degree of control over your measurement time. An accurate, precise... ## First-Order Systems: The Happy Family May 3, 20141 comment Все счастли́вые се́мьи похо́жи друг на дру́га, ка́ждая несчастли́вая семья́ несчастли́ва по-сво́ему. — Лев Николаевич Толстой, Анна Каренина Happy families are all alike; every unhappy family is unhappy in its own way. — Lev Nicholaevich Tolstoy, Anna Karenina I was going to write an article about second-order systems, but then realized that it would be... ## Best Firmware Architecture Attributes Architecture of a firmware (FW) in a way defines the life-cycle of your product. Often companies start with a simple-version of a product as a response to the time-to-market caveat of the business, make some cash out of the product with a simple feature set. It takes only less than 2-3 years to reach a point where the company needs to develop multiple products derived from the same code base and multiple teams need to develop... ## The CRC Wild Goose Chase: PPP Does What?!?!?! I got a bad feeling yesterday when I had to include reference information about a 16-bit CRC in a serial protocol document I was writing. And I knew it wasn’t going to end well. The last time I looked into CRC algorithms was about five years ago. And the time before that… sometime back in 2004 or 2005? It seems like it comes up periodically, like the seventeen-year locust or sunspots or El Niño,... ## VHDL tutorial - combining clocked and sequential logic March 3, 2008 In an earlier article on VHDL programming ("VHDL tutorial" and "VHDL tutorial - part 2 - Testbench", I described a design for providing a programmable clock divider for a ADC sequencer. In this example, I showed how to generate a clock signal (ADCClk), that was to be programmable over a series of fixed rates (20MHz, 10MHz, 4MHz, 2MHz, 1MHz and 400KHz), given a master clock rate of 40MHz. 
A reader of that article had written to ask if it was possible to extend the design to... ## OOKLONE: a cheap RF 433.92MHz OOK frame cloner Introduction A few weeks ago, I bought a set of cheap wireless outlets and reimplemented the protocol for further inclusion in a domotics platform. I wrote a post about it here: The device documentation mentions that it operates on the same frequency as the previous... ## Coding - Step 0: Setting Up a Development Environment Articles in this series: You can easily find a million articles out there discussing compiler nuances, weighing the pros and cons of various data structures or discussing the  optimization of databases. Those sorts of articles are fascinating reads for advanced programmers but... ## Cortex-M Exception Handling (Part 2) The first part of this article described the conditions for an exception request to be accepted by a Cortex-M processor, mainly concerning the relationship of its priority with respect to the current execution priority. This part will describe instead what happens after an exception request is accepted and becomes active. PROCESSOR OPERATION AND PRIVILEGE MODE Before discussing in detail the sequence of actions that occurs within the processor after an exception request... ## Signal Processing Contest in Python (PREVIEW): The Worst Encoder in the World When I posted an article on estimating velocity from a position encoder, I got a number of responses. A few of them were of the form "Well, it's an interesting article, but at slow speeds why can't you just take the time between the encoder edges, and then...." My point was that there are lots of people out there which take this approach, and don't take into account that the time between encoder edges varies due to manufacturing errors in the encoder. For some reason this is a hard concept...
2023-03-27 01:25:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22622765600681305, "perplexity": 2266.7899821882315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946584.94/warc/CC-MAIN-20230326235016-20230327025016-00482.warc.gz"}
http://glennjlea.ca/latex/6-0-creating-chapters/
# Formatting chapter sections

## Chapters

Each chapter begins with the following elements:

• The first line requires you to define the title of the chapter. This is used in the headers and in the Table of Contents.
• The second line is used for cross-referencing to this chapter. It serves as a marker or anchor.
• The third line is used by the index.

Creating section headings is quite easy. Second- and third-level headings are just as easy. Use the following commands to create these sections. Three levels of headings are best. Any more and you may need to rewrite sections so they are at most three levels deep. If you must, then just use a bolded paragraph for a fourth-level heading.

## Paragraphs

Paragraphs are entered without markup tags. Adding a new paragraph requires a blank line between paragraphs. Then use myindentpar in the document flow to indent a paragraph based on the settings.
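The commands themselves did not survive the extraction above. As a sketch only, assuming the standard LaTeX book-class commands rather than whatever custom macros this site's template actually defines (and with a made-up chapter title), the chapter opening and the three heading levels could look like this:

```latex
\documentclass{book}
\usepackage{makeidx}
\makeindex

\begin{document}

% Chapter opening: the three elements described above
\chapter{Crossing the Prairies}   % 1) title, used in the page headers and the ToC
\label{chap:prairies}             % 2) cross-referencing marker/anchor
\index{prairies}                  % 3) entry picked up by the index

% Three levels of headings; beyond that, fall back to a bolded paragraph
\section{First level}
\subsection{Second level}
\subsubsection{Third level}
\textbf{A bolded lead-in standing in for a fourth level.}

Paragraphs are entered without markup tags.

A blank line before this line started a new paragraph.
% myindentpar is the template's own macro for indented paragraphs;
% its definition is not shown in the extracted article, so it is omitted here.

\printindex
\end{document}
```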
2020-10-25 16:32:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7438787817955017, "perplexity": 1793.7643963394225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107889574.66/warc/CC-MAIN-20201025154704-20201025184704-00630.warc.gz"}
https://www.physicsforums.com/threads/countable-but-not-second-countable-topological-space.187934/
# Countable But Not Second Countable Topological Space

I'm wondering if someone can furnish me with either an example of a topological space that is countable (cardinality) but not second countable or a proof that countable implies second countable. Thanks.

morphism
Homework Helper

Take the countable set $\mathbb{N}\times\mathbb{N}$. Topologize it by making any set that doesn't contain (0,0) open, and if a set does contain (0,0), it's open iff it contains all but a finite number of points in all but a finite number of columns. (Draw a picture. If an open set contains (0,0), then it can only miss infinitely many points in a finite number of columns, while it misses finitely many points in all the other columns.) Now this topology doesn't have a countable base at (0,0), so it's not first countable, let alone second countable.

Source: Steen & Seebach, Counterexamples in Topology, page 54. They call it the Arens-Fort Space.

Thanks! I own that book so I'll be having a look pretty soon.

morphism
Homework Helper

I've been thinking about this a little bit more, and I believe I have another example, although it's a bit more 'sophisticated' in that it requires a bit of advanced set theory. This time our countable set is X = $\mathbb{N} \cup \{\mathbb{N}\}$. Take any free ultrafilter F on $\mathbb{N}$, and define a topology on X by letting each subset {n} of $\mathbb{N}$ be open, and defining nbhds of $\{\mathbb{N}\}$ to be those of the form $\{\mathbb{N}\} \cup U$, where U is in F. As in the previous example, this topology fails to have a countable base at $\{\mathbb{N}\}$ (because we cannot have a countable base for any free ultrafilter on the naturals), so again it fails to be first countable.

Edit: Hmm... Now I'm wondering if there's a countable space that's first countable but not second countable!

Edit2: Maybe that was silly. If X is countable and has a first countable topology, then the union of the bases at each of its points is the countable union of countable sets and is hence countable (and a basis for the topology). So, I'm led to conclude that a countable, first countable space is necessarily second countable.
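The closing argument in Edit2 can be written out a little more explicitly. A compact version in my own notation (not quoted from the thread): if $X=\{x_1,x_2,\dots\}$ is countable and $\mathcal{B}_n$ is a countable local base at $x_n$, then

```latex
\[
  \mathcal{B} \;=\; \bigcup_{n\in\mathbb{N}} \mathcal{B}_n ,
  \qquad
  |\mathcal{B}| \;\le\; \aleph_0 \cdot \aleph_0 \;=\; \aleph_0 ,
\]
\[
  U \text{ open},\ x \in U
  \;\Longrightarrow\;
  x = x_n \text{ for some } n
  \;\Longrightarrow\;
  \exists\, B \in \mathcal{B}_n \subseteq \mathcal{B} :\ x \in B \subseteq U ,
\]
```

so $\mathcal{B}$ is a countable base and X is second countable.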
2021-12-08 18:51:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9069373607635498, "perplexity": 254.27773823549285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363520.30/warc/CC-MAIN-20211208175210-20211208205210-00476.warc.gz"}
https://tex.stackexchange.com/questions/493138/arrow-with-vertical-bar-in-the-middle-tikz-cd
# Arrow with vertical bar in the middle (Tikz-cd)

I need to have an arrow that has a vertical bar in the middle, something like this: ----|----> in commutative diagrams. Tikzcd does not have such an arrow supported directly, as far as I can tell. Any ideas on how to do this? Ideally I would like to still be able to have labels. I found a way to do this in the math environment using \mathclap and + as shown in code for arrow with a short vertical line in the middle of the shaft (not perfect but good enough) but this does not work well within a commutative diagram.

tikz-cd supports markings.

```latex
\documentclass{article}
\usepackage{tikz-cd}
\begin{document}
\begin{tikzcd}[]
a \arrow[r,"|" marking] & b
\end{tikzcd}
\end{document}
```

If you want to have full control over all aspects of the bar, you can use a TikZy decoration.

```latex
\documentclass{article}
\usepackage{tikz-cd}
\usetikzlibrary{decorations.markings}
\tikzset{mid vert/.style={/utils/exec=\tikzset{every node/.append style={outer sep=0.8ex}},
    postaction=decorate,decoration={markings,
    mark=at position 0.5 with {\draw[-] (0,#1) -- (0,-#1);}}},
  mid vert/.default=0.75ex}
\begin{document}
\begin{tikzcd}[]
a \arrow[r,mid vert,"pft"] & b \arrow[r,"pht"] & c
\end{tikzcd}
\end{document}
```

In this version, the parameter is the length of the bar, with the default being 2*0.75ex, but you can adjust the other parameters as well.

• Great this works! Is it possible to add to the definition of mid vert the fact that all the labels should have outer sep of, say, 0.8ex? I am not familiar with decorations – geguze May 29 at 4:29
• @geguze Yes, it is. (I actually like the idea!) I modified the code accordingly. (And this change is only local, as shown in the example, i.e. the other arrows will have business as usual.) – user121799 May 29 at 4:40
• Wonderful! One last question. The definition of the arrow with a bar in running math, suggested at the link included in my question, does not look very good, especially when compared to a tikz-cd diagram. Any suggestion on how to make that better? One could put a tikz-cd inline based on the mid vert definition you provided, not sure that's the best way though. – geguze May 29 at 4:51
• @geguze How about $a\mathrel{\tikz[baseline=-0.5ex]{\draw[->,mid vert](0,0) --(1.2em,0);}}b\longrightarrow c$ with the above preamble? Or \newcommand{\tobar}{\mathrel{\tikz[baseline=-0.5ex]{\draw[->,mid vert](0,0) --(1.2em,0);}}} $a\tobar b\longrightarrow c$? There is a long discussion how to make this symbol adjust to the font size, see this thread. (For a simple vertical bar you could also use \rule ...) – user121799 May 29 at 4:57
2019-10-15 06:25:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7806340456008911, "perplexity": 1081.3031201621097}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986657586.16/warc/CC-MAIN-20191015055525-20191015083025-00539.warc.gz"}
https://notes.mikejarrett.ca/index-1.html
# Mobi station activity

I finally got around to learning how to plot data on maps with Cartopy, so here are some quick maps of Mobi bikeshare station activity.

First, an animation of station activity during a random summer day. The red-blue spectrum represents whether more bikes were taken or returned at a given station, and the brightness represents total station activity during each hour. I could take the time resolution lower than an hour, but I doubt the data is very meaningful at that level.

There's actually less pattern to this than I expected. I thought that in the morning you'd see more bikes being taken from the west end and south False Creek and returned downtown, and vice versa in the afternoon. But I can't really make out that pattern visually.

I've also pulled out total station activity during the time I've been collecting this data, June through October 2017. I've separated it by total bikes taken and total bikes returned. A couple things to note about these images: many of these stations were not active for the whole time period, and some stations have been moved around. I've made no effort to account for this; this is simply the raw usage at each location, so the downtown […]

The similarity in these maps is striking. Checking the raw data, I'm seeing incredibly similar numbers of bikes being taken and returned at each station. This either means that, on aggregate, people use Mobis for two-way trips much more often than I expected; one-way trips are cancelling each other out; or Mobi is rebalancing the stations to a degree that any unevenness is being masked out*. I hope to look more into whether I can spot artificial station balancing from my data soon, but we may have to wait for official data from Mobi to get around this.

*There's also the possibility that my data is bad, but let's ignore that for now.

Instead of just looking at activity, I tried to quantify whether there are different activity patterns at different stations. Like last week, I performed a principal component analysis (PCA), but with bike activity each hour in the columns and each station as a row. I then plot the top two components which most explain the variance in the data.

Like last week, much of the difference in station activity is explained by the total number of trips at each station, here represented on the X axis. There is a single main group of stations with a negative slope, but some outliers that are worth looking at. There are a few stations with higher Y values than expected. These 5 stations are all Stanley Park stations. There's another four stations that might be slight outliers. These are Anderson & 2nd (Granville Island); Aquatic Centre; Coal Harbour Community Centre; and Davie & Beach. All seawall stations at major destinations. So all the outlier stations are stations that we wouldn't expect to show regular commuter patterns, but more tourist-style activity.

I was hoping to see different clusters to represent residential-area stations vs employment-area stations, but these don't show up. Not terribly surprising, since the Mobi stations cover an area of the city where there is fairly dense residential development almost everywhere. This fits with our maps of station activity, where we saw that there were no major differences between bikes taken and bikes returned at each station.

All the source code used for data acquisition and analysis in this post is available on my github page.
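The mapping code itself isn't included in this extract. As a rough sketch of the kind of Cartopy map described above (one point per station, coloured by net bikes returned minus taken, sized by total activity), assuming a pandas dataframe with lon, lat, taken, and returned columns; the column names and the map extent are my own, not from the post:

```python
import matplotlib.pyplot as plt
import pandas as pd
import cartopy.crs as ccrs

def plot_station_activity(stations: pd.DataFrame,
                          extent=(-123.20, -123.05, 49.25, 49.32)):
    """Scatter bikeshare stations on a map: colour = net flow, size = total activity."""
    fig = plt.figure(figsize=(8, 6))
    ax = fig.add_subplot(1, 1, 1, projection=ccrs.PlateCarree())
    ax.set_extent(extent, crs=ccrs.PlateCarree())   # rough central-Vancouver bounding box
    ax.coastlines(resolution="10m")

    net = stations["returned"] - stations["taken"]      # the red-blue axis in the post
    total = stations["returned"] + stations["taken"]    # the brightness/size in the post
    sc = ax.scatter(stations["lon"], stations["lat"],
                    c=net, s=5 + 50 * total / total.max(),
                    cmap="coolwarm", transform=ccrs.PlateCarree())
    fig.colorbar(sc, ax=ax, label="bikes returned minus bikes taken")
    return fig
```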
# Machine learning with Vancouver bike share data

Six months ago I came across Jake VanderPlas' blog post examining Seattle bike commuting habits through bike trip data. I wanted to try to recreate it for Vancouver, but the city doesn't publish detailed bike trip data, just monthly numbers. For plan B, I looked into Mobi bike share data. But still no published data! Luckily, Mobi does publish an API with the number of bikes at each station. It doesn't give trip information, but it's a start. All the code needed to recreate this post is available on my github page.

### Data Acquisition

The first problem was to read the API and take a guess at station activity. To do this, I query the API every minute. Whenever the bike count at a station changes, this is counted as bikes being taken out or returned. I don't know exactly how often Mobi updates this API, but I'm certainly undercounting activity -- whenever two people return a bike and take a bike within a minute or so of each other I'll miss the activity. But it's good enough for my purposes, and I'm more interested in trends than total numbers anyway.

I had two main problems querying the API. First, I started by running the query script on my home machine, which meant that any computer downtime meant missing data. There's a few days missing while I updated my computer. Eventually I migrated to a google cloud server, so downtime is no longer an issue, but this introduced the second problem: time zones. I hadn't set a time zone for my new server, so all the times were recorded as UTC, while earlier data had been recorded in local Vancouver time. It took a long time of staring at the data, wondering why it didn't make sense, before I realized what had happened, but luckily it was an easy fix in Pandas.

### Analysis

Our long stretch of good weather this summer is visible in the data. Usage was pretty consistent over July and August, and began to fall off near the end of September when the weather turned. I'll be looking more into the relationship between weather and bike usage once I have more off-season data, but for now I'm more interested in zooming in and looking at daily usage patterns.

Looking at a typical week in mid summer, we see weekdays showing a typical commuter pattern with morning and evening peaks and a midday lull. One thing that jumps out is the afternoon peak being consistently larger than the morning peak. With bike share, people have the option to take the bus to work in the morning and then pick up a bike after work if they're in the mood. Weekends lose that bimodal distribution and show a single normal distribution centered in the afternoon. On most weekend days and some weekdays, there is a shoulder or very minor peak visible in the late evening, presumably people heading home from a night out.

Looking at the next week, Monday immediately jumps out as showing a weekend pattern instead of a weekday. That Monday, of course, is the Monday of the August long weekend.

So, by eye we can fairly easily distinguish weekday and weekend travel patterns. How can we train a computer to do the same? First, I pivoted my data such that each row is a day, and each column is the hourly bike activity at each station (# columns = # stations * 24). I decided to keep the station information instead of summing across stations, but both give about the same result. This was provided as input to the principal component analysis (PCA) class of the Scikit-Learn Python package.
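A minimal sketch of that pivot-then-PCA step; the dataframe layout and column names are assumptions for illustration, not the post's actual code:

```python
import pandas as pd
from sklearn.decomposition import PCA

def daily_usage_components(trips: pd.DataFrame, n_components: int = 2):
    """trips: one row per observation, with 'date', 'station', 'hour' and 'count' columns."""
    # One row per day, one column per (station, hour) pair; absent combinations become 0.
    daily = trips.pivot_table(index="date", columns=["station", "hour"],
                              values="count", aggfunc="sum", fill_value=0)
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(daily.values)      # shape: (n_days, n_components)
    return pd.DataFrame(scores, index=daily.index,
                        columns=[f"PC{i + 1}" for i in range(n_components)]), pca
```

The unsupervised clustering step described below would then run on the same scores, for example with `GaussianMixture(n_components=2).fit_predict(scores)` from `sklearn.mixture`.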
PCA attempts to reduce the dimensionality of a data set (in our case, columns) while preserving the variance. For visualization, we can plot our data based on the two components which most explain the variance in the data. Each point is a single day, colour labelled by total number of trips that day.

[Figure: PCA coloured by number of daily trips]

It's apparent that the first component (along the X axis) corresponds roughly (but not exactly) to total number of trips. But what does the Y axis represent? To investigate further, we label the data points by day of week.

[Figure: PCA coloured by day of week]

The pattern is immediately clear. Weekdays are clustered at the bottom of our plot, and weekends are clustered at the top. A few outliers jump out. There are 3 Mondays clustered in with the weekend group. These turn out to be the Canada Day, BC Day and Labour Day stat holidays.

[Figure: PCA with notable Mondays labelled]

Finally, I wanted to try unsupervised clustering to see if weekday and weekend clusters are separated enough to be distinguished automatically. For this, I used the GaussianMixture class from Scikit-learn. Here, we try to automatically split our data into a given number of groups, in this case two.

[Figure: PCA and unsupervised clustering of June-September bike share usage]

Not quite. There is a group of low-volume weekend days in the top right corner that can't be automatically distinguished from weekdays. All these days are in June and September. Maybe with more non-summer data this will resolve itself. Out of curiosity, I re-ran the PCA and unsupervised clustering with only peak season data (July and August). Here, with a more homogenous dataset, clustering works much better. In fact, only the first component (plotted along the X axis) is needed to distinguish between usage patterns.

[Figure: PCA and unsupervised clustering of July and August bike share usage]

Bike share usage will obviously decline during Vancouver's wet season, but I'm very interested to see how usage patterns will differ during the lower volume months.

All the source code used for data acquisition and analysis in this post is available on my github page.

# Datetime axis formatting with Pandas and matplotlib

Pandas' DataFrame.plot() function is handy, but sometimes I run up against edge cases and spend too much time trying to fix them. In one case recently, I wanted to overlay a line plot on top of a bar plot. Easy, right? Not when your dataframe has a datetime axis. The bar plot and line plot functions format the x-axis date labels differently, and cause chaos when you try to use them on the same axes. None of the usual tick label formatting methods got me back anything useable.

The solution was to take a step back and use the basic matplotlib functions to plot instead of the pandas wrappers. Calling ax.plot() and ax.bar() gives sensible outputs where df.plot() didn't. See the below notebook for an example of the problem and solution.

# Matplotlib on the web

I've been learning a lot of Matplotlib recently at work and through a course I took on data visualization. I've especially had fun with making interactive plots, and I was curious about whether MPL code could be converted into HTML5 and/or javascript for presentation on the web. I'm a researcher, not a developer, so my main goal was to use the datavis library I already know without having to learn a bunch of javascript. In my googling, I've found a few solutions to this issue.
None are perfect for what I was hoping to do (have Django and MPL work together to produce a nice interactive figure) but I did come across some interesting tools.

#### The 'webagg' backend for matplotlib

This was the solution I was expecting to work. When you specify the webagg backend after importing matplotlib, running f.show() starts up a mini web server and opens the plot in your default browser. As far as I can tell, the plot has the full functionality you'd get with any of the traditional backends. The magic of having full interactivity in the browser relies on the figure staying connected to the python kernel, which is non-trivial to set up on a live website. I haven't found any instructions that a non-expert can follow to set this up on a personal website.

#### MPLD3

mpld3 is a python package that converts your matplotlib code into d3.js code. It's great! Instead of f.show(), you can run mpld3.save_html(f, 'test.html') and you have an html snippet that can be inserted into a webpage. The plot below is just vanilla matplotlib saved with mpld3.save_html.

[embedded interactive plot]

The plots look great and you get the default panning/zooming functions you expect in MPL. However widgets don't work so you don't get the full interactive experience. mpld3 does have some plugins, so for example you can get tooltips when hovering over plot elements. This is great, but requires some non-MPL code. Below I've added a hover tooltip which displays the label of a line plot when you hover over it. Very neat, but still not quite matplotlib's picker.

[embedded interactive plot]

The mpld3 website has a tutorial on writing your own plugins, and it looks like there's very little stopping you from recreating any feature you want as long as you know d3.js. I really appreciate all the work that's gone into mpld3, but it's still missing some pretty important features. For example, you might have noticed the years in the x-axis tick labels have a comma in them, which would be easily fixable in vanilla MPL with ax.xaxis.set_ticklabels(). But mpld3 doesn't (yet?) have support for setting ticklabels, so you're stuck with whatever shows up. A small price to pay for the ease of use, but still noticeable.

#### Bokeh

Bokeh is a python visualization library that targets web browsers. It looks like it's further along in development than mpld3, however it's not trivially convertible from matplotlib. There is a bokeh.mpl module, but it is no longer maintained and is considered deprecated. I haven't tried Bokeh myself yet, but if I was going to make web visuals a major focus and wanted to stick with python, this is probably where I'd end up.

#### plot.ly

plot.ly seems to be the enterprise solution for interactive web plots in python (and other languages), and it funnels users towards hosting plots on its own server and has a subscription model. But they've released their code as open source, and it's possible to use the plotly library to convert your code to javascript to add to your own site. It's certainly more fully featured, but it's also substantially heavier than mpld3 -- for the same plot, the html snippet is 20,000 characters for plot.ly but 2,000 characters for mpld3. To make the plot below, I took the same code as above and just added

```python
import plotly.plotly as py
import plotly.tools as tls
import plotly.offline as plo

plotly_fig = tls.mpl_to_plotly(f)
plo.plot(plotly_fig, filename='housing_plotly')
```

[embedded interactive plot]

The tooltips happen without any extra code. I haven't spent much time learning the ins and outs of plotly so I'm not sure how customizable it is.
I like that it works out of the box and looks professional, but the default tooltips and toolbar do feel a little busy. Hard to argue with free and easy though! For me, the likely answer seems to be that I'll use mpld3 when I want simple and clean web visualizations, and I'll use plot.ly when I want to have a bit more interactivity. If I end up spending more of my time doing this I'll just learn Bokeh instead, but I'd rather not learn a new library if I don't have to. And I'll keep my fingers crossed that smarter people than me make it easy to host fully interactive MPL plots which stay connected to the python kernel on any django website. Update: Jake VanderPlas gave a great talk at Pycon that covers all this that I wish I'd found while researching all this.
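For reference, the mpld3 route discussed above is only a couple of lines end to end. A minimal sketch, with the figure contents invented for illustration:

```python
import matplotlib.pyplot as plt
import mpld3

fig, ax = plt.subplots()
ax.plot([2010, 2012, 2014, 2016], [350, 420, 480, 640], label="benchmark price")
ax.set_xlabel("year")
ax.legend()

# Write a standalone HTML file containing the d3.js version of the figure...
mpld3.save_html(fig, "figure_mpld3.html")
# ...or keep the snippet as a string to drop into a page template.
snippet = mpld3.fig_to_html(fig)
```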
2021-06-24 18:18:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.267996221780777, "perplexity": 1578.0275125619255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488556482.89/warc/CC-MAIN-20210624171713-20210624201713-00417.warc.gz"}
http://gmatclub.com/forum/yale-som-2013-calling-all-applicants-135398-700.html?kudos=1
Find all School-related info fast with the new School-Specific MBA Forum It is currently 26 Jun 2016, 09:27 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # Events & Promotions ###### Events & Promotions in June Open Detailed Calendar # Yale SOM 2013 - Calling All Applicants Author Message Manager Joined: 17 Oct 2011 Posts: 202 Location: United States (NY) Concentration: Social Entrepreneurship, Organizational Behavior GMAT 1: 660 Q44 V38 GMAT 2: 720 Q47 V41 GPA: 3.62 WE: Consulting (Consulting) Followers: 9 Kudos [?]: 64 [0], given: 81 Re: Yale SOM 2013 - Calling All Applicants [#permalink] ### Show Tags 18 Jan 2013, 17:00 SOMBound wrote: Thanks for the info all. OhDenny are there many coffee shops/restaurants in the East Rock area near campus or is it primarily residential? Within 5 blocks I have three really nice deli/cafes, two Asian groceries, a Dunkin' Donuts, a steak house, two pizza place (one of them famous), two really nice Italian restaurants, a Chinese restaurant, a pho place, a cupcake place, a liquor store, an Irish pub, and the best damn grilled cheese place I've ever been to. But Downtown has much more, and much closer, partly because the blocks are shorter. VP Status: Yale SOM! Joined: 06 Feb 2012 Posts: 1377 Location: United States Concentration: Marketing, Strategy Followers: 48 Kudos [?]: 506 [0], given: 327 Re: Yale SOM 2013 - Calling All Applicants [#permalink] ### Show Tags 18 Jan 2013, 17:57 Expert's post OhDenny wrote: SOMBound wrote: Thanks for the info all. OhDenny are there many coffee shops/restaurants in the East Rock area near campus or is it primarily residential? Within 5 blocks I have three really nice deli/cafes, two Asian groceries, a Dunkin' Donuts, a steak house, two pizza place (one of them famous), two really nice Italian restaurants, a Chinese restaurant, a pho place, a cupcake place, a liquor store, an Irish pub, and the best damn grilled cheese place I've ever been to. But Downtown has much more, and much closer, partly because the blocks are shorter. There's a grilled cheese place?! I think I'm sold.... _________________ aerien Note: I do not complete individual profile reviews. Please use the Admissions Consultant or Peer Review forums to get feedback on your profile. GMAT Club Premium Membership - big benefits and savings Intern Joined: 12 Dec 2012 Posts: 18 Location: United States Concentration: Strategy, Economics GMAT 1: 730 Q V GPA: 3.9 Followers: 3 Kudos [?]: 10 [0], given: 2 Re: Yale SOM 2013 - Calling All Applicants [#permalink] ### Show Tags 18 Jan 2013, 21:29 Not to sound panicky, but have most R2 interview invites already gone out. When can we expect to hear one way or another if we are going to get an interview invite? Intern Joined: 29 Dec 2012 Posts: 47 Concentration: Finance Followers: 2 Kudos [?]: 12 [0], given: 11 Re: Yale SOM 2013 - Calling All Applicants [#permalink] ### Show Tags 19 Jan 2013, 00:13 skm wrote: GPT55 wrote: MBA2ER wrote: Congrats everyone, I got an invite too~! This is my first one and the Puyallup's question came to my mind too. When do admissions officers start giving out offers? 
I may be reading too much into it but of all the schools that I applied to, only Booth explicitly stated that all decisions will be released at once on a particular date/time. Most other schools (incl. Yale) said decisions will be released by a certain date, which technically gives them plenty leeway to extend offers maybe even with scholarships to people early (by weeks, maybe even a month). Do schools actually do that, and to what extent? The difference in approach you point out between so-called "rolling admission" schools (Yale, Stern etc.) and traditional "decision-date" schools is interesting, but I would not read too much into it. I can not force myself to believe they would disadvantage anyone within a round based on when he or she interviews. My only guess is that they'll extend admission offers earlier to people who really stand out of the pack and who they definitely want on board. As far as I know everyone received R1 admissions offers on the same day this year, which was the date posted on the website (12/13) and also listed as the date we would hear 'by'. Someone checked with the admissions office about this - 'by' meant 'on', at least for R1. I interviewed the same week as machichi, although a few days earlier, and I know at least a few of the people who were there were offered admission too. You might want to read through our many pages of anxious postings earlier in this thread as the R1 applicants had this same discussion. The only people who received calls earlier than 12/13 were international applicants, and it was already the 13th of December in their respective time zones when they got the calls. Thanks skm/GPT55, this helps clear things up a lot. PanchoPippin wrote: Not to sound panicky, but have most R2 interview invites already gone out. When can we expect to hear one way or another if we are going to get an interview invite? PanchoPippin, I think someone mentioned in this thread that invites are sent out randomly over a period of time. Considering the application cycle just closed less than two weeks ago, I really can't imagine that they have already made decisions on most of the applications! Having said that, the latest available time slot I saw in the system was 22nd Feb, which I thought was remarkably early given that most schools interview into March. Maybe they'll open up more slots as they send out more invites? Not really sure, the R1 guys or current students can probably advise better. Anyway, hang in there and good luck! Manager Joined: 25 Oct 2012 Posts: 111 Location: United States Concentration: Healthcare GMAT 1: 710 Q48 V38 GPA: 3.81 WE: Medicine and Health (Health Care) Followers: 1 Kudos [?]: 27 [0], given: 12 Re: Yale SOM 2013 - Calling All Applicants [#permalink] ### Show Tags 19 Jan 2013, 18:47 aerien wrote: There's a grilled cheese place?! I think I'm sold.... It's fantastic, as advertised, too. I go there everytime I'm in New Haven. Intern Joined: 21 Oct 2012 Posts: 15 GPA: 3.78 Followers: 0 Kudos [?]: 4 [0], given: 0 Re: Yale SOM 2013 - Calling All Applicants [#permalink] ### Show Tags 21 Jan 2013, 14:38 OhDenny wrote: SOMBound wrote: Anybody attending Winter Welcome, and if so are you being hosted for Sunday night? Starting to look at apartment locations. Any current SOMers that have opinions about East Rock vs Downtown? What are some popular apartment complexes/areas to live? Hey there, and congrats! As others have said, it depends what you're looking for, but you may also be surprised when you see it with your own eyes. 
I came from Brooklyn, so all prices were immediately so much more reasonable than I was used to, but they are not cheap by the standards of most of the rest of the country. I'd say 80% falls between $700 and$950 per person per month. Some Downtown apartment complexes, (like 360 State and the Eli) are on the upper bound, and can cost significantly more than that, so just keep that in mind. I LOVE my house in East Rock. I'll have a 3 minute walk to class when the new building opens, (it's 7 mins now), and I'm on the Southern end of East Rock, so I can quite easily walk to GPSCY, Shake Shack, 360 State, and other parts of the Yale Campus. Those that live a bit higher in East Rock typically have bikes so they can get to class no problem. That said - although East Rock seems to be a bit sleepier since it's filled with converted old colonial mansions, we have held a significant amount of parties, game nights, barbecues, and so on in East Rock, including 4 or 5 progressive parties. It's also probably the "safest" neighborhood in New Haven, though I put that in quotes because most students live in pretty safe neighborhoods - I've yet to feel unsafe walking around the Yale Campus surrounding neighborhoods. Interesting fact: 360 State was designed by an SOM and YArch alum, and we studied it for our very first case during orientation. It's a really cool building, akin to some of the more corporate condo buildings in the Financial District of Manhattan. Welcome Weekend is typically a better time to check out potential apartments - I both met my roommates and put down my deposit that weekend. A lot of the August 1st rentals won't even come onto the market until May/June anyway, so if you're going to be checking places out, I'd suggest looking more at neighborhoods, and housing attributes that you're looking for. Do a lot of admits attend both the Winter Welcome and the regular Welcome Weekend? Since I'm not sure I'll be able to venture up again I was going to stay Monday (unhosted, but Sunday hosted) and then do a preliminary gut-check of downtown apt buildings on Tues even though it won't quite be time to sign a lease! Other than 360/Eli, are there any other building where a bunch of SOMer live? Do apartments buildings tend to fill up super quick? Manager Joined: 17 Oct 2011 Posts: 202 Location: United States (NY) Concentration: Social Entrepreneurship, Organizational Behavior GMAT 1: 660 Q44 V38 GMAT 2: 720 Q47 V41 GPA: 3.62 WE: Consulting (Consulting) Followers: 9 Kudos [?]: 64 [0], given: 81 Re: Yale SOM 2013 - Calling All Applicants [#permalink] ### Show Tags 21 Jan 2013, 18:49 mba2527 wrote: Do a lot of admits attend both the Winter Welcome and the regular Welcome Weekend? Since I'm not sure I'll be able to venture up again I was going to stay Monday (unhosted, but Sunday hosted) and then do a preliminary gut-check of downtown apt buildings on Tues even though it won't quite be time to sign a lease! Other than 360/Eli, are there any other building where a bunch of SOMer live? Do apartments buildings tend to fill up super quick? I came to both last year, though I'm also a giant nerd, so take that with a grain of salt. I found that last year, Winter Welcome was more academic-focused, and Welcome Weekend was more total-package focused, though I'm in charge of planning both this year, so I'm trying to make the experiences more similar. 
One of the big draws for me was that I lived close enough to come to both (New York), and that a ton of the people I met at both weekends were down with meeting up throughout the rest of the year and in different cities, so by the time actual move-in and orientation rolled around, I already knew several dozen classmates. Welcome Weekend has an advantage to this as well, since there are far more people in attendance, (150-175 ish vs. 70). Don't get stressed out about signing leases in February. Even April was pretty early to be hunting, and a lot of folks found their places over the summer. There are a couple of other buildings where multiple SOMers live, but I can't think of them off of the top of my head. Perhaps one of my other current student compadres can help out with that? I'd say the bulk of our class, however, lives in old colonials that have been converted to multi-units. Intern Joined: 15 Jan 2013 Posts: 15 Concentration: Strategy, Accounting Schools: Marshall '15 (A) Followers: 1 Kudos [?]: 5 [0], given: 0 Re: Yale SOM 2013 - Calling All Applicants [#permalink] ### Show Tags 22 Jan 2013, 08:32 wondering if any interview invites come out today as first day back from holiday of this week... Manager Joined: 04 Mar 2011 Posts: 129 Concentration: Finance, Economics GMAT 1: 700 Q44 V41 GMAT 2: 720 Q47 V41 WE: Law (Non-Profit and Government) Followers: 4 Kudos [?]: 39 [0], given: 38 Re: Yale SOM 2013 - Calling All Applicants [#permalink] ### Show Tags 22 Jan 2013, 10:59 andreak wrote: Just had my interview over Skype with a 2nd year student. It went really well! He was very easy to speak to, and the 30 minutes zoomed by as the interview turned into an informative, friendly conversation about what sets Yale apart. He asked very standard questions: - Talk about a time you received push-back when working on a project - Talk about a time there was a set-back during a project you were working on and things fell behind. (He joked about how b-school applicants tend to be type As who don't like this question, but as a current Peace Corps volunteer, I had plenty of examples to draw from.) - Why Yale? - How do you want to get involved on campus? Props to Yale for not throwing out obnoxious curve ball questions! VP Status: Yale SOM! Joined: 06 Feb 2012 Posts: 1377 Location: United States Concentration: Marketing, Strategy Followers: 48 Kudos [?]: 506 [0], given: 327 Re: Yale SOM 2013 - Calling All Applicants [#permalink] ### Show Tags 22 Jan 2013, 20:26 Expert's post OhDenny wrote: I came to both last year, though I'm also a giant nerd, so take that with a grain of salt. I found that last year, Winter Welcome was more academic-focused, and Welcome Weekend was more total-package focused, though I'm in charge of planning both this year, so I'm trying to make the experiences more similar. @OhDenny: Not sure if you had any involvement in the date pick for Winter Welcome, but I did want to thank whoever decided to have it over President's Day weekend. That Monday holiday makes travel plans a bit easier to coordinate, and from a work perspective, that's one less day to take off (or one extra vacation day to enjoy)!! _________________ aerien Note: I do not complete individual profile reviews. Please use the Admissions Consultant or Peer Review forums to get feedback on your profile. 
GMAT Club Premium Membership - big benefits and savings Intern Joined: 21 Oct 2012 Posts: 15 GPA: 3.78 Followers: 0 Kudos [?]: 4 [0], given: 0 Re: Yale SOM 2013 - Calling All Applicants [#permalink] ### Show Tags 22 Jan 2013, 20:32 aerien wrote: OhDenny wrote: I came to both last year, though I'm also a giant nerd, so take that with a grain of salt. I found that last year, Winter Welcome was more academic-focused, and Welcome Weekend was more total-package focused, though I'm in charge of planning both this year, so I'm trying to make the experiences more similar. @OhDenny: Not sure if you had any involvement in the date pick for Winter Welcome, but I did want to thank whoever decided to have it over President's Day weekend. That Monday holiday makes travel plans a bit easier to coordinate, and from a work perspective, that's one less day to take off (or one extra vacation day to enjoy)!! haha... Totally didn't put 2+2 together! I haven't had President's Day (let alone MLK Day) off since maybe my grammar school days! Hopefully that means a good group will be able to head to New Haven that weekend. Manager Joined: 17 Oct 2011 Posts: 202 Location: United States (NY) Concentration: Social Entrepreneurship, Organizational Behavior GMAT 1: 660 Q44 V38 GMAT 2: 720 Q47 V41 GPA: 3.62 WE: Consulting (Consulting) Followers: 9 Kudos [?]: 64 [0], given: 81 Re: Yale SOM 2013 - Calling All Applicants [#permalink] ### Show Tags 22 Jan 2013, 21:46 aerien wrote: @OhDenny: Not sure if you had any involvement in the date pick for Winter Welcome, but I did want to thank whoever decided to have it over President's Day weekend. That Monday holiday makes travel plans a bit easier to coordinate, and from a work perspective, that's one less day to take off (or one extra vacation day to enjoy)!! I think you may be overestimating my power, but I am also glad that its over Presidents' Day. When I said 'in charge' I meant of student-facing activities. Bruce and co are really the ones spearheading the whole thing, and they do a stellar job. Posted from my mobile device VP Status: Yale SOM! Joined: 06 Feb 2012 Posts: 1377 Location: United States Concentration: Marketing, Strategy Followers: 48 Kudos [?]: 506 [0], given: 327 Re: Yale SOM 2013 - Calling All Applicants [#permalink] ### Show Tags 23 Jan 2013, 01:06 Expert's post OhDenny wrote: aerien wrote: @OhDenny: Not sure if you had any involvement in the date pick for Winter Welcome, but I did want to thank whoever decided to have it over President's Day weekend. That Monday holiday makes travel plans a bit easier to coordinate, and from a work perspective, that's one less day to take off (or one extra vacation day to enjoy)!! I think you may be overestimating my power, but I am also glad that its over Presidents' Day. When I said 'in charge' I meant of student-facing activities. Bruce and co are really the ones spearheading the whole thing, and they do a stellar job. Posted from my mobile device Well I appreciate it nonetheless! _________________ aerien Note: I do not complete individual profile reviews. Please use the Admissions Consultant or Peer Review forums to get feedback on your profile. 
GMAT Club Premium Membership - big benefits and savings Intern Joined: 23 Jan 2013 Posts: 15 Concentration: Entrepreneurship, General Management GMAT 1: 720 Q49 V39 GPA: 3.8 WE: General Management (Transportation) Followers: 0 Kudos [?]: 1 [0], given: 0 Re: Yale SOM 2013 - Calling All Applicants [#permalink] ### Show Tags 23 Jan 2013, 11:05 Hi guys, not sure if any insiders could help me out - I submitted my application on round 2 deadline day (Jan 8), but the status is still showing 'submitted' online. Does this mean that my application has not been reviewed yet? Or I'll probably be rejected (not received any email though)? Thanks! VP Status: Yale SOM! Joined: 06 Feb 2012 Posts: 1377 Location: United States Concentration: Marketing, Strategy Followers: 48 Kudos [?]: 506 [0], given: 327 Re: Yale SOM 2013 - Calling All Applicants [#permalink] ### Show Tags 23 Jan 2013, 21:07 Expert's post haiwen wrote: Hi guys, not sure if any insiders could help me out - I submitted my application on round 2 deadline day (Jan 8), but the status is still showing 'submitted' online. Does this mean that my application has not been reviewed yet? Or I'll probably be rejected (not received any email though)? Thanks! I've been admitted and my status still says submitted. If you get an interview invite, you'll receive an email so there's no need to constantly check your status. Posted from my mobile device _________________ aerien Note: I do not complete individual profile reviews. Please use the Admissions Consultant or Peer Review forums to get feedback on your profile. GMAT Club Premium Membership - big benefits and savings Intern Joined: 18 Apr 2011 Posts: 15 Followers: 0 Kudos [?]: 5 [0], given: 0 Re: Yale SOM 2013 - Calling All Applicants [#permalink] ### Show Tags 24 Jan 2013, 09:13 Hi all Did any international applicants get an invite at Yale for the long weekend? Any interview invites to international applicants? I am from Canada applied in R2. Manager Joined: 04 Mar 2011 Posts: 129 Concentration: Finance, Economics GMAT 1: 700 Q44 V41 GMAT 2: 720 Q47 V41 WE: Law (Non-Profit and Government) Followers: 4 Kudos [?]: 39 [0], given: 38 Re: Yale SOM 2013 - Calling All Applicants [#permalink] ### Show Tags 24 Jan 2013, 11:26 GreenMan wrote: Hi all Did any international applicants get an invite at Yale for the long weekend? Any interview invites to international applicants? I am from Canada applied in R2. I'm from Europe and got an interview invite last week. I'll have my interview on Skype though as unfortunately I cannot travel to the US. Manager Joined: 15 Nov 2009 Posts: 123 Concentration: Finance GMAT 1: 680 Q48 V35 GMAT 2: 700 Q48 V37 Followers: 1 Kudos [?]: 48 [0], given: 15 Re: Yale SOM 2013 - Calling All Applicants [#permalink] ### Show Tags 25 Jan 2013, 08:57 aerien wrote: I've been admitted and my status still says submitted. If you get an interview invite, you'll receive an email so there's no need to constantly check your status. Posted from my mobile device Looks like b-schools like you Which will you choose? VP Status: Yale SOM! Joined: 06 Feb 2012 Posts: 1377 Location: United States Concentration: Marketing, Strategy Followers: 48 Kudos [?]: 506 [0], given: 327 Re: Yale SOM 2013 - Calling All Applicants [#permalink] ### Show Tags 25 Jan 2013, 09:34 Expert's post rainfall wrote: aerien wrote: I've been admitted and my status still says submitted. If you get an interview invite, you'll receive an email so there's no need to constantly check your status. 
Posted from my mobile device Looks like b-schools like you Which will you choose? Still deciding! I should make a decision soon though Posted from my mobile device _________________ aerien Note: I do not complete individual profile reviews. Please use the Admissions Consultant or Peer Review forums to get feedback on your profile. GMAT Club Premium Membership - big benefits and savings Intern Joined: 25 Sep 2012 Posts: 31 GMAT 1: 660 Q39 V41 GMAT 2: 730 Q45 V44 Followers: 0 Kudos [?]: 8 [0], given: 11 Re: Yale SOM 2013 - Calling All Applicants [#permalink] ### Show Tags 25 Jan 2013, 10:38 Hey all, just joining the conversation. Was invited to interview and I just scheduled it on Feb 13th in the AM. Anyone else joining me?! _________________ GMAT Official 3: Q45, V44, 730 570-to-660-to-730-my-gmat-journey-144703.html The GMAT is beatable! Perseverance is the name of the game! Re: Yale SOM 2013 - Calling All Applicants   [#permalink] 25 Jan 2013, 10:38 Go to page   Previous    1  ...  33   34   35   36   37   38   39  ...  65    Next  [ 1288 posts ] Similar topics Replies Last post Similar Topics: 1 Calling all Yale SOM EMBA Applicants (2015 Intake) Class of 2017!! 3 24 Oct 2014, 23:36 12 Calling All Yale SOM 2014 Waitlisted Applicants 62 11 Dec 2013, 15:27 14 Calling All Yale SOM 2013 'Waitlisted' Applicants 159 13 Dec 2012, 17:57 117 Yale SOM 2012 - Calling All Applicants (Class of 2014) 831 16 Aug 2011, 11:26 268 Calling All Fall 2011 Yale SOM Applicants 2001 10 Jun 2010, 11:30 Display posts from previous: Sort by
2016-06-26 16:27:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22580939531326294, "perplexity": 8067.954849874384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.72/warc/CC-MAIN-20160624154955-00065-ip-10-164-35-72.ec2.internal.warc.gz"}
https://en.wikisource.org/wiki/Page:Newton's_Principia_(1846).djvu/290
# Page:Newton's Principia (1846).djvu/290

…II of this Book) the moment KL of AK will be equal to ${\displaystyle \scriptstyle {\frac {2APQ+2BA\times PQ}{Z}}}$ or ${\displaystyle \scriptstyle {\frac {2BPQ}{Z}}}$, and the moment KLON of the area AbNK will be equal to ${\displaystyle \scriptstyle {\frac {2BPQ\times LO}{Z}}}$ or ${\displaystyle \scriptstyle {\frac {BPQ\times BD^{3}}{2Z\times CK\times AB}}}$.

Case 1. Now if the body ascends, and the gravity be as AB² + BD², BET being a circle, the line AC, which is proportional to the gravity, will be ${\displaystyle \scriptstyle {\frac {AB^{2}+BD^{2}}{Z}}}$, and DP² or AP² + 2BAP + AB² + BD² will be AK ${\displaystyle \scriptstyle \times }$ Z + AC ${\displaystyle \scriptstyle \times }$ Z or CK ${\displaystyle \scriptstyle \times }$ Z; and therefore the area DTV will be to the area DPQ as DT² or DB² to CK ${\displaystyle \scriptstyle \times }$ Z.

Case 2. If the body ascends, and the gravity be as AB² - BD², the line AC will be ${\displaystyle \scriptstyle {\frac {AB^{2}-BD^{2}}{Z}}}$, and DT² will be to DP² as DF² or DB² to BP² - BD² or AP² + 2BAP + AB² - BD², that is, to AK ${\displaystyle \scriptstyle \times }$ Z + AC ${\displaystyle \scriptstyle \times }$ Z or CK ${\displaystyle \scriptstyle \times }$ Z. And therefore the area DTV will be to the area DPQ as DB² to CK ${\displaystyle \scriptstyle \times }$ Z.

Case 3. And by the same reasoning, if the body descends, and therefore the gravity is as BD² - AB², and the line AC becomes equal to ${\displaystyle \scriptstyle {\frac {BD^{2}-AB^{2}}{Z}}}$; the area DTV will be to the area DPQ, as DB² to CK ${\displaystyle \scriptstyle \times }$ Z: as above. Since, therefore, these areas are always in this ratio, if for the area
2017-04-24 15:19:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 16, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8974064588546753, "perplexity": 3963.3534679873123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119637.34/warc/CC-MAIN-20170423031159-00396-ip-10-145-167-34.ec2.internal.warc.gz"}
http://www.seobythesea.com/2006/09/solving-different-urls-with-similar-text-dust/
Solving Different URLs with Similar Text (DUST)

Different URLs, Similar Pages

There are sites where (substantially) the same page may be found under different Uniform Resource Locators (URLs) or addresses. For example:

• http://www.nytimes.com = http://nytimes.com

When this happens, there can be some negative results from the perspectives of both search engines and site owners, such as:

• Search engines have to spend time trying to visit each version of the page
• Search engines may treat each page as different but duplicate pages.

It's recommended that this type of duplication of pages under different addresses be avoided, if at all possible. Site owners can try to reduce or limit the possibility that these different URLs with the same (or very similar) content appear on their sites. What might search engines do to limit or stop this kind of problem?

A Possible Solution for Search Engines?

My examples are simple ones, but there are more complex situations where multiple addresses may exist for the same page. An algorithm to help search engines understand when the same (or a very similar) page is being exhibited under different URLs was the focus of a poster presented at the WWW2006 Conference this past May. The extended abstract of that poster, Do not Crawl in the DUST: Different URLs with Similar Text, looks at some of the more complex versions, and describes an algorithm that might help search engines recognize those pages before visiting them, so that only one is crawled and possibly indexed. The authors are Uri Schonfeld, Ziv Bar-Yossef, and Idit Keidar. (Note: Ziv Bar-Yossef joined Google last month.) Here's a snippet from the introductory paragraphs to that document:

Many web sites define links, redirections, or aliases, such as allowing the tilde symbol ("~") to replace a string like "/people", or "/users". Some sites allow different conventions for file extensions - ".htm" and ".html"; others allow for multiple default index file names - "index.html" and "tex2html12". A single web server often has multiple DNS names, and any can be typed in the URL. As the above examples illustrate, DUST is typically not random, but rather stems from some general rules, which we call DUST rules, such as "~" $\rightarrow$ "/people", or "/default.html" at the end of the URL can be omitted. Moreover, DUST rules are typically not universal. Many are artifacts of a particular web server implementation. For example, URLs of dynamically generated pages often include parameters; which parameters impact the page's content is up to the software that generates the pages. Some sites use their own conventions; for example, a forum site we studied allows accessing story number "num" on its site both via the URL "http://domain/story?id=num" and via "http://domain/story_num". In this paper, we focus on detecting DUST rules within a given web site. We are not aware of any previous work tackling this problem.

Other pages that might be determined to be similar are ones where the main content is available at one URL, and the same content with some additional information (such as blog comments) can be seen at another URL.

Identifying DUST

The poster notes that search engines do attempt to identify DUST with some simple and some complex approaches, for example:

1. "http://" may be added to links found during crawling, where it is missing.
2. Trailing slashes used in links (http://www.example.com/) may be removed.
3. Hash-based summaries of page content (shingles) may be compared after pages are fetched.
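To make the first two of these normalizations concrete, here is a rough Python sketch of the kind of canonicalization a crawler might apply before comparing URLs; the specific rules and names are illustrative assumptions, not how any particular search engine actually does it:

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url):
    """Apply a few simple, conservative URL normalizations."""
    # 1. Add a scheme where it is missing.
    if "://" not in url:
        url = "http://" + url
    scheme, netloc, path, query, fragment = urlsplit(url)
    # Host names are case-insensitive.
    netloc = netloc.lower()
    # 2. Remove a trailing slash from the path.
    if path.endswith("/") and path != "/":
        path = path.rstrip("/")
    # Drop fragments; they never reach the server.
    return urlunsplit((scheme, netloc, path, query, ""))

assert canonicalize("www.example.com/a/") == canonicalize("http://www.example.com/a")
```

Rules like these only catch generic cases; the point of the approach described next is to learn the site-specific rules automatically.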
What the paper introduces is an algorithm, which the authors refer to as DustBuster, that looks at individual sites and tries to see if there are rules being followed on the site where similar content is being shown under different URLs.

For example, in the site where "story?id=" can be replaced by "story_", we are likely to see in the URL list many different pairs of URLs that differ only in this substring; we say that such a pair of URLs is an instance of "story?id=" and "story_". The set of all instances of a rule is called the rule's support. Our first attempt to uncover DUST is therefore to seek rules that have large support.

It also tries to understand possible exceptions to those rules. The poster defines those in more detail, and it's worth trying to understand the examples, exceptions, and approaches that they use.

Letting the Search Engine Decide Which URL is Good

There's one problem that I have with the approach, and that is that the algorithm decides which pages to index and keep, and which to avoid and not fetch for indexing. This could be a problem, for instance, for a news story page which is available at different URLs, with one displaying comments and the other not showing them. Or a product page, which might be shown twice: once with, and once without user reviews. Or a set of dynamic pages where some small portion of the page changes in response to which link is clicked upon. But those pages might have difficulties being indexed anyway, or be filtered during the serving of a page, if a shingling approach is used and determines that they are the same or substantially similar pages. Either way, if an algorithm like DustBuster were used, or another approach, it's still the search engine deciding which of the similar pages it might include in its index, and which it wouldn't. If you can avoid DUST, it's not a bad idea to try.

Author: Bill Slawski
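The notion of support can be illustrated with a toy example. The sketch below simply counts, for one candidate rule, how many URL pairs in a list are instances of it; this is a deliberate simplification for illustration (the real DustBuster works from large URL lists with further filtering), and all names here are made up:

```python
def rule_support(urls, alpha, beta):
    """List instances of the candidate DUST rule alpha -> beta in a URL list.

    An instance is a pair (u, v) from the list such that substituting
    alpha by beta in u yields another URL v from the same list.
    """
    url_set = set(urls)
    instances = []
    for u in urls:
        if alpha in u:
            v = u.replace(alpha, beta)
            if v != u and v in url_set:
                instances.append((u, v))
    return instances

urls = ["http://domain/story?id=1", "http://domain/story_1",
        "http://domain/story?id=2", "http://domain/story_2"]
print(len(rule_support(urls, "story?id=", "story_")))  # 2 instances
```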
2015-11-28 04:01:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35617974400520325, "perplexity": 1814.0125945015836}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398450762.4/warc/CC-MAIN-20151124205410-00349-ip-10-71-132-137.ec2.internal.warc.gz"}
https://serpent.vtt.fi/mediawiki/index.php?title=Automated_burnup_sequence&diff=1575&oldid=1574
# Automated burnup sequence

Homogenized group constants form the input data for core-level fuel cycle and transient simulator calculations. The data is parametrized according to discrete state-points, which are defined by the local thermal hydraulic conditions together with reactivity control. The process of group constant generation must cover the full range of operating states within the reactor core, which often requires repeating the assembly-level calculation thousands of times. Since the local operating conditions inside the fuel assembly also affect how the materials are depleted, the state-points by which the data is parametrized are not completely independent. The calculations are instead divided into:

1. Branch variations, taking into account momentary changes in the operating conditions, such as fuel temperature, moderator density and temperature, boron concentration and insertion of control rods
2. History variations, taking into account conditions that persist for an extended period of time, such as moderator temperature and density, boron concentration and positioning of control rods

In practice, the procedure involves burnup calculations covering all assembly types and history variations. Branch variations are accounted for by performing restart calculations for each burnup point, varying the local operating conditions accordingly. Serpent provides an automated burnup sequence, capable of performing branch variations for a single history run. The procedure works by first running a burnup calculation, after which a number of restarts are performed for selected burnup points. For each restart the code invokes a number of user-defined variations in the input, corresponding to the branches to different state-points. The branches are defined using the branch card, and the combination of variations by the coefficient matrix. The group constant data is collected into a separate output file easily read by post-processing scripts. The input and output formats are described below with examples.

For the moment the automated burnup sequence is limited to performing branch variations, and variations in histories must be handled by writing separate input files. For complete examples, see the collection of example input files.

## Branch types

Each branch card defines one or several variations applied to the basic input. The currently supported variations can be used for:

1. Replacing one material with another (repm)
2. Replacing one universe with another (repu)
3. Applying a universe, surface or fill transformation (tra)
4. Changing the density and temperature of a material (stp)

### Replacing materials

The typical use for the material replacement variation is to change the boron concentration in the coolant.
Assume, for example, that there are three coolant compositions defined in the input:

• cool_0B without boron
• cool_750B with 750 ppm boron
• cool_1500B with 1500 ppm boron

as:

% --- Coolant without boron:

mat cool_0B -0.77028 moder lwtr 1001
1001.06c 3.33330E-01
8016.06c 6.66670E-01

% --- Coolant with 750 ppm boron:

mat cool_750B -0.77028 moder lwtr 1001
1001.06c 3.33076E-01
8016.06c 6.66161E-01
5010.06c 1.51847E-04
5011.06c 6.11204E-04

% --- Coolant with 1500 ppm boron:

mat cool_1500B -0.77028 moder lwtr 1001
1001.06c 3.32821E-01
8016.06c 6.65653E-01
5010.06c 3.03690E-04
5011.06c 1.22239E-03

% --- Thermal scattering data:

therm lwtr lwj3.11t

If the material used to fill the coolant channel in the original geometry is cool_750B, then this is the material used throughout the burnup calculation. Once the depletion history is completed, the code performs the restarts and invokes the branches. The boron concentration can be varied to 0 and 1500 ppm with branches:

% --- Branch to low boron concentration:

branch Blo repm cool_750B cool_0B

% --- Branch to high boron concentration:

branch Bhi repm cool_750B cool_1500B

which simply replace material cool_750B with materials cool_0B and cool_1500B, respectively. Names Blo and Bhi are used to identify the branches in the coefficient matrix.

The material replacement variation also works with mixtures, which may be a convenient way to define coolant compositions using water and boron. For example, cool_750B above could be defined as:

% --- Water:

mat water -0.76973 moder lwtr 1001
1001.06c 0.33333
8016.06c 0.66667

therm lwtr lwj3.11t

% --- Natural boron:

mat boron 1.00000
5010.06c 0.19900
5011.06c 0.80100

% --- Coolant with 750 ppm boron:

mix cool_750B
water -0.99925
boron -750E-6

The replacing materials or mixtures cannot be used as part of the original geometry.

### Replacing universes

Universe replacement variations are useful for invoking changes to entire sub-regions of the geometry, for example, for the purpose of control rod insertion in 2D geometries. An empty guide tube and a tube with control rod inserted could be defined as:

% --- Empty guide tube:

pin GTU
cool  0.56134
zirc4 0.60198
cool

% --- Rodded guide tube:

pin ROD
AIC   0.43310
steel 0.48387
cool  0.56134
zirc4 0.60198
cool

where cool, zirc4, AIC, and steel are the materials for coolant, guide tube, absorber and control rod tube, respectively. Insertion of control rods can then be invoked with branch:

% --- Insertion of control rods:

branch CR repu GTU ROD

which simply replaces the empty guide tube with a rodded tube.

### Applying transformations

When spatial homogenization is performed for a 2D geometry, the insertion of control rods can be accomplished by simply replacing one universe with another. In 3D geometries, however, the rods can be fully withdrawn, fully inserted or anything in between, and the most convenient way to move the rods is to apply a coordinate transformation that sets the position. Different transformations can be invoked at different branches, to account for the various positions. The applied transformations are defined separately in the input. For example, if a homogenized 3D node has the height of 19.25 cm, transformations corresponding to fully inserted, half-way and fully withdrawn control rod positions could be defined as:

utrans pos0 0.0 0.0 0.000
utrans pos1 0.0 0.0 9.625
utrans pos2 0.0 0.0 19.250

assuming that without any transformation the bottom of the absorber sits at the bottom of the node.
Without branch calculation the first entries in the utrans cards would be interpreted as universe names, and if universes "pos0", "pos1" and "pos2" are not defined, Serpent would simply ignore these entries. Assuming that the universe defining the control rods is called "ROD", the branches invoking the movement could be defined as:

% --- Fully inserted:

branch CR0 tra ROD pos0

% --- Half-way:

branch CR1 tra ROD pos1

% --- Fully withdrawn:

branch CR2 tra ROD pos2

The transformation branch works by simply replacing the universe name in the transformation card with the one given in the branch card. For example, when branch "CR1" is invoked, the universe transformation applied for the geometry would become:

utrans ROD 0.0 0.0 9.625

which would lift the rods to the half-way position. Because of the way the variation is implemented, it is important that the names of the invoked transformations do not correspond to any actual universe in the geometry. If this is the case, Serpent will apply the transformations accordingly already during the burnup calculation.

The transformation branch can also be applied to surface and fill transformations. In such a case the target in the branch card refers to a surface and a cell, respectively. If multiple transformation types are used in the calculation, it is important to pay attention to how the associated universes, surfaces and cells are named. When the branch is invoked, Serpent identifies the transformation based on the name, without checking the type. So a universe transformation applied to universe 1 and a surface transformation applied to surface 1, for example, become indistinguishable. Transformations are not limited to translations; rotations and any operations defined by a transformation matrix can also be applied.

### Applying state variations

State variations can be used to change material temperatures and densities, for example, for the purpose of fuel or moderator temperature branches. When the material is not associated with thermal scattering data, the input for the branch card consists of the new density and temperature. This is typical for fuel materials; for example, branches:

% --- Branch to low fuel temperature:

branch Flo stp fuel -10.338 600

% --- Branch to high fuel temperature:

branch Fhi stp fuel -10.338 1200

change the temperature of material "fuel" to 600 and 1200 Kelvin, respectively. The routine works by looking for the cross section data for the nearest temperature below the given value and applying the built-in Doppler-broadening preprocessor routine to adjust the cross sections to the correct temperature.

Changes in materials with associated thermal scattering data require three additional input parameters for each library:

• The name of the therm card associated with the material
• Name of a data library at temperature below or equal to the given value
• Name of a data library at temperature above or equal to the given value

For example, if the coolant is defined as:

% --- BWR coolant with 40% void fraction:

mat cool -0.443760 moder lwtr 1001
1001.06c 0.66667
8016.06c 0.33333

% --- Thermal scattering data for light water at 600K

therm lwtr lwe7.12t

a branch to a lower temperature could be defined as:

branch Clo stp cool -0.739605 560 lwtr lwe7.10t lwe7.12t

This increases the density from 0.443760 to 0.739605 g/cm3 and lowers the temperature from 600 to 560 Kelvin. The change in temperature for free-atom cross sections is handled by replacing the 600K library data with 300K data and broadening the cross sections to 560K.
The change in thermal scattering data is handled using tabular interpolation. The provided library names lwe7.10t and lwe7.12t correspond to 550 and 600K temperatures, respectively. If the material is associated with multiple thermal scattering libraries, they must all be included in the branch card.

### Multiple simultaneous variations

The number of variations per branch card is not limited. If the geometry consists of more than one fuel type, their temperatures can be varied with a single branch:

branch Fhi
stp fuel1 -10.338 1200
stp fuel2 -10.338 1200
stp fuel3 -10.338 1200
stp fuel4 -10.338 1200
stp fuel5 -10.338 1200

Some core simulator codes apply polynomial interpolation between state points, which often does not require the full coefficient matrix with all possible state-point combinations. Instead of a multi-dimensional matrix it is then more convenient to define a one-dimensional list of branches, in which the necessary combinations are defined. For example, the calculation of a cross term involving variation in both coolant boron concentration and fuel temperature could be accomplished with branch:

branch BRA01
repm cool_750B cool_0B
stp fuel -10.338 1200

State variations alone cannot be used to handle heat expansion, but the change in density can be accompanied by a universe replacement variation to account for the change in dimensions:

branch Fh
stp fuel1 -10.1348 1200
repu FUE FUEH

assuming that universes FUE and FUEH describe the fuel pin at nominal and expanded dimensions, respectively.

## Setting up the coefficient matrix

The coef card lists the burnup points at which the restarts are performed. These points must correspond to points in the depletion history (0 for initial composition). The burnup points are followed by the coefficient calculation matrix, which may be one- or multi-dimensional. For example:

coef 11 0 0.1 1 3 5 10 15 20 30 40 50
3 nom Blo Bhi
3 nom Clo Chi
3 nom Flo Fhi
2 nom CR

invokes the branch calculation at 11 burnup points at 0, 0.1, 1, 3, 5, 10, 15, 20, 30, 40 and 50 MWd/kgU burnup. The coefficient matrix has four dimensions. The first row defines boron concentration branches: nominal, low and high (see the example on material replacement branch above). The second and third row define variations in coolant and fuel temperature (see the examples on state variation branches above). The last row defines insertion of control rods (see the example on universe replacement branch above). The first entry in each row is a branch with no variations at all, which corresponds to the nominal conditions in the burnup calculation.

Since the matrix has four dimensions, each run involves a combination of four branches. In total the matrix defines 3 $\times$ 3 $\times$ 3 $\times$ 2 = 54 combinations, of which the first 10 are:

1 nom + nom + nom + nom (nominal state)
2 Blo + nom + nom + nom (boron at low concentration)
3 Bhi + nom + nom + nom (boron at high concentration)
4 nom + Clo + nom + nom (coolant at low temperature)
5 Blo + Clo + nom + nom (boron at low concentration and coolant at low temperature)
6 Bhi + Clo + nom + nom (boron at high concentration and coolant at low temperature)
7 nom + Chi + nom + nom (coolant at high temperature)
8 Blo + Chi + nom + nom (boron at low concentration and coolant at high temperature)
9 Bhi + Chi + nom + nom (boron at high concentration and coolant at high temperature)
10 nom + nom + Flo + nom (fuel at low temperature)
...

As mentioned above, some core simulator codes do not require all combinations of branches.
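The expansion of the full matrix into branch combinations, in the loop order shown above (first row varying fastest), can be reproduced with a few lines of Python. This is only an illustration of the bookkeeping, not anything Serpent requires; the row contents are copied from the coef example above:

```python
from itertools import product

# Rows of the coefficient matrix, copied from the coef card above.
rows = [
    ["nom", "Blo", "Bhi"],   # boron concentration
    ["nom", "Clo", "Chi"],   # coolant temperature
    ["nom", "Flo", "Fhi"],   # fuel temperature
    ["nom", "CR"],           # control rod insertion
]

# itertools.product varies its last argument fastest, while the listing
# above varies the first row fastest, so reverse the rows and the tuples.
combos = [tuple(reversed(c)) for c in product(*reversed(rows))]

print(len(combos))                       # 54 = 3 * 3 * 3 * 2
for i, c in enumerate(combos[:10], 1):   # reproduces the listing above
    print(i, " + ".join(c))
```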
To avoid running unnecessary combinations it is possible to define a one-dimensional list of branches, for example:

coef 11 0 0.1 1 3 5 10 15 20 30 40 50
10 BR01 BR02 BR03 BR04 BR05 BR03 BR07 BR08 BR09 BR10

and set up the combinations in the branch cards. This may complicate the input, but it also saves a lot of computing time since only branches that are used for the input data are run.

## Output

The group constant data from the automated burnup sequence is written in a separate output file named "<input>.coe". The structure of the file is intended to be easily read by post-processing scripts, as described below. The parameters included in the output are selected using the set coefpara card (see also the parameter names in group constant generation).

### Passing additional information with variables

Additional information useful for the post-processing script can be passed from the input directly into the output using variables. For example:

% --- Branch to low fuel temperature:

branch Flo
stp fuel -10.338 600
var TFU 600

This defines variable "TFU" and sets it to value "600" when the fuel temperature branch is invoked. When the post-processing script reads the data, it has the information that this is a branch to lower fuel temperature.

Variables are also useful for passing history information. This can be done by defining a branch with variables only:

% --- History variables:

branch HIS
var BHI 450 % Boron concentration 450 ppm
var WHI 550 % Coolant temperature 550K

and including this branch in the coefficient matrix as its own single-valued dimension:

coef 11 0 0.1 1 3 5 10 15 20 30 40 50
10 BR01 BR02 BR03 BR04 BR05 BR03 BR07 BR08 BR09 BR10
1 HIS

The same information is then included in the output at every branch.

### Output format

The .coe output file is divided into blocks, and each block provides the group constant data from a single run:

(loop over runs)
Itot Ntot Icoe Ncoe Nuni
Nbra BRA1 BRA2 ...
Nvar VAR1 VAL1 VAR2 VAL2 ...
BU Ibu Nbu
(loop over universes)
UNI Npara
(loop over group constant data)
PARA Nval VAL1 VAL2 ...
(end loop over group constant data)
(end loop over universes)
(end loop over runs)

The block begins with line:

Itot Ntot Icoe Ncoe Nuni

where the values are:

Itot : Run index
Ntot : Total number of runs (number of branch combinations times number of burnup points)
Icoe : Coefficient calculation index
Ncoe : Total number of coefficient calculations (number of branch combinations in the coefficient matrix)
Nuni : Number of universes in a single run

The second line provides information on the branches invoked in the current run:

Nbra BRA1 BRA2 ...

where the values are:

Nbra : The number of branches invoked (dimension of the coefficient matrix)
BRAn : The branch names (as provided in the branch card)

The third line lists the variables:

Nvar VAR1 VAL1 VAR2 VAL2 ...

where the values are:

Nvar : The number of variable name-value pairs that follow
VARn : The variable names (as defined in the branch cards)
VALn : The variable values (as assigned in the branch cards)

Three variables are always printed: VERSION, DATE and TIME, which provide the code version used, and the date and time the calculation was completed. The 4th line provides information on the burnup points:

BU Ibu Nbu

where the values are:

BU : Burnup (as listed in the coef card)
Ibu : Burnup index
Nbu : Total number of burnup points

The common header part is followed by a loop over Nuni universes.
The first line in each subsection is:

UNI Npara

where the values are:

UNI : The name of the universe where group constant calculation was performed
Npara : Total number of output parameters that follow

This is then followed by Npara lines of group constant data. Each line has the format:

PARA Nval VAL1 VAL2 ...

where the values are:

PARA : The name of the output variable (as listed in the "set coefpara" card as well as the _res.m output)
Nval : Total number of values that follow
VALn : The output values

The order in which the blocks are printed out depends on how the branches are listed in the coefficient matrix. Serpent runs all burnup points for each branch combination before moving on to the next combination, so the loop over burnup points forms the inner loop and the loop over branch combinations the outer loop. The number of output values and the order in which the values are printed depends on the parameters (see the parameter names in group constant generation).
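The block structure above maps naturally onto a small token-stream reader. The following Python sketch is only an illustration of the described layout; the file handling, error handling and the typing of the values are assumptions on our part, not part of Serpent, and quoted values containing spaces would need a smarter tokenizer:

```python
def read_coe(path):
    """Read the block-structured .coe file described above (sketch only).

    Assumes purely whitespace-delimited tokens and parses the group
    constant values as floats; everything else is kept as strings.
    """
    with open(path) as f:
        tokens = f.read().split()
    pos = 0

    def take(n):
        # Consume the next n tokens from the stream.
        nonlocal pos
        out = tokens[pos:pos + n]
        pos += n
        return out

    runs = []
    while pos < len(tokens):
        itot, ntot, icoe, ncoe, nuni = (int(x) for x in take(5))
        nbra = int(take(1)[0])
        branches = take(nbra)
        nvar = int(take(1)[0])
        pairs = take(2 * nvar)
        variables = dict(zip(pairs[0::2], pairs[1::2]))
        bu, ibu, nbu = take(3)
        universes = {}
        for _ in range(nuni):
            uni, npara = take(2)
            data = {}
            for _ in range(int(npara)):
                para, nval = take(2)
                data[para] = [float(v) for v in take(int(nval))]
            universes[uni] = data
        runs.append({
            "run": (itot, ntot), "coefficient": (icoe, ncoe),
            "branches": branches, "variables": variables,
            "burnup": float(bu), "bu_point": (int(ibu), int(nbu)),
            "universes": universes,
        })
    return runs
```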
2022-06-29 12:14:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5316401124000549, "perplexity": 4173.937902818372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103639050.36/warc/CC-MAIN-20220629115352-20220629145352-00123.warc.gz"}
http://aimpl.org/engelstr/2/
## 2. Submanifolds in Engel manifolds

#### Problem 2.1.

Can properties of the space of closed Engel curves be used to distinguish Engel structures? In particular for the standard Engel structure on $\mathbb{R}^4$, is there a difference between homotopy of Engel loops and formal homotopy of Engel loops?

1. Remark. Perhaps one should exclude rigid curves from the discussion. [bryant]
• Remark. E. Murphy conjectures that non-rigid Engel knots obey an h-principle.

#### Problem 2.2.

Is there an h-principle for transverse embedded surfaces in a given Engel structure?

1. Remark. Already the following special case is interesting. Take a contact 3-manifold and two transverse knots that are not transversely isotopic but have equal formal invariants. Are they transverse Engel concordant through a surface in a standard Engel structure on the product of the 3-manifold with the interval?

Cite this as: AimPL: Engel structures, available at http://aimpl.org/engelstr.
2018-07-17 05:55:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6095117330551147, "perplexity": 2393.268117876352}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589573.27/warc/CC-MAIN-20180717051134-20180717071134-00439.warc.gz"}
https://scottaaronson.blog/?p=1628
## Merry Christmas! My quantum computing research explained, using only the 1000 most common English words [With special thanks to the Up-Goer Five Text Editor, which was inspired by this xkcd] I study computers that would work in a different way than any computer that we have today.  These computers would be very small, and they would use facts about the world that are not well known to us from day to day life.  No one has built one of these computers yet—at least, we don’t think they have!—but we can still reason about what they could do for us if we did build them. How would these new computers work? Well, when you go small enough, you find that, in order to figure out what the chance is that something will happen, you need to both add and take away a whole lot of numbers—one number for each possible way that the thing could happen, in fact. What’s interesting is, this means that the different ways a thing could happen can “kill each other out,” so that the thing never happens at all! I know it sounds weird, but the world of very small things has been known to work that way for almost a hundred years. So, with the new kind of computer, the idea is to make the different ways each wrong answer could be reached kill each other out (with some of them “pointing” in one direction, some “pointing” in another direction), while the different ways that the right answer could be reached all point in more or less the same direction. If you can get that to happen, then when you finally look at the computer, you’ll find that there’s a very good chance that you’ll see the right answer. And if you don’t see the right answer, then you can just run the computer again until you do. For some problems—like breaking a big number into its smallest parts (say, 43259 = 181 × 239)—we’ve learned that the new computers would be much, much faster than we think any of today’s computers could ever be. For other problems, however, the new computers don’t look like they’d be faster at all. So a big part of my work is trying to figure out for which problems the new computers would be faster, and for which problems they wouldn’t be. You might wonder, why is it so hard to build these new computers? Why don’t we have them already? This part is a little hard to explain using the words I’m allowed, but let me try. It turns out that the new computers would very easily break. In fact, if the bits in such a computer were to “get out” in any way—that is, to work themselves into the air in the surrounding room, or whatever—then you could quickly lose everything about the new computer that makes it faster than today’s computers. For this reason, if you’re building the new kind of computer, you have to keep it very, very carefully away from anything that could cause it to lose its state—but then at the same time, you do have to touch the computer, to make it do the steps that will eventually give you the right answer. And no one knows how to do all of this yet. So far, people have only been able to use the new computers for very small checks, like breaking 15 into 3 × 5. But people are working very hard today on figuring out how to do bigger things with the new kind of computer. In fact, building the new kind of computer is so hard, that some people even believe it won’t be possible! But my answer to them is simple. If it’s not possible, then that’s even more interesting to me than if it is possible! And either way, the only way I know to find out the truth is to try it and see what happens. 
Sometimes, people pretend that they already built one of these computers even though they didn’t. Or they say things about what the computers could do that aren’t true. I have to admit that, even though I don’t really enjoy it, I do spend a lot of my time these days writing about why those people are wrong. Oh, one other thing. Not long from now, it might be possible to build computers that don’t do everything that the new computers could eventually do, but that at least do some of it. Like, maybe we could use nothing but light and mirrors to answer questions that, while not important in and of themselves, are still hard to answer using today’s computers. That would at least show that we can do something that’s hard for today’s computers, and it could be a step along the way to the new computers. Anyway, that’s what a lot of my own work has been about for the past four years or so. Besides the new kind of computers, I’m also interested in understanding what today’s computers can and can’t do. The biggest open problem about today’s computers could be put this way: if a computer can check an answer to a problem in a short time, then can a computer also find an answer in a short time? Almost all of us think that the answer is no, but no one knows how to show it. Six years ago, another guy and I figured out one of the reasons why this question is so hard to answer: that is, why the ideas that we already know don’t work. Anyway, I have to go to dinner now. I hope you enjoyed this little piece about the kind of stuff that I work on. ### 44 Responses to “Merry Christmas! My quantum computing research explained, using only the 1000 most common English words” John Sidles, meet Up-Goer Five Text Editor. 2. rrtucci Says: That’s the difference between you and me. I don’t need no stinking editor to sound like a 5 year old. Let the D-wave wild rumpus begin. 3. Pat Says: what is the work that “Six years ago, another guy and I figured out one of the reasons why this question is so hard to answer” refers to? 4. Job Says: Scott, in one of your Q&As a while ago i asked a question but didn’t phrase it properly. In the spirit of prioritizing effective communication, let me retry. NP-Complete problems often ask about the existence of a value in an implicit set. That set might be, for example, the set of all subgraphs of an input graph G (e.g. for sub-graph isomorphism). The implicit set is usually derived from an input value, of length n. For example, the problem might receive n numbers and ask whether a subset of those numbers has a specific property (e.g. sums to x). We can define similar problems that have much larger implicit sets. My question is, do you think such problems are still in NP or even NP-Complete? In more concrete terms, here’s a problem similar to subset sum which has a much larger implicit set: Given a number X, and a set of n numbers, S_0: – let S_1 be the set of all possible sums of numbers in S_0 – let S_2 be the set of all possible sums of n or fewer numbers in S_1 – let S_k be the set of all possible sums of n or fewer numbers in S_[k-1] …does S_k (for some constant, pre-defined k) contain X? IMO this class of problem, despite having a much larger implicit input set (i.e. greater than 2^n) is still in NP – the certificate is the set of numbers for each set, so no more than n^k. What are your thoughts on this? 5. Scott Says: Pat #3: what is the work that “Six years ago, another guy and I figured out one of the reasons why this question is so hard to answer” refers to? 6. 
Scott Says: Sorry, just kidding. The answer is that, yes, the type of modifications you’re talking about still leave the problem in NP, and for precisely the reason you say: the witness is still polynomial-size. 7. Scott Says: John Sidles, meet Up-Goer Five Text Editor. LOL! Except that, with the Sidles Up-Goer Five Text Editor, you would have to conduct all human communication using only the words “STEM,” “21st-century,” “manifold,” “Thurston,” … 😀 8. Scott Says: Incidentally, I got curious: of the 1000 words I was allowed, how many did I actually use? The answer, by my count, is 232. OK, now back to real work. 🙂 9. Ronak M Soni Says: Is it really fair to use common words in uncommon meanings, like you did ‘bit’ and ‘state?’ Kinda beats the point, no? 10. Gil Kalai Says: Let me try too: Many years ago people thought about what computers can do and what computers can check. They had two roads to go: The first was to believe that computers can do everything that computers can check. This could be a great! The second was to believe that computers can not do everything that computers can check. This is not so great but very interesting! But this is what most people believe and try to understand. It will be great to understand it! But even this is very hard. A little later people thought about a new kind of computers that can do a lot of things that our real computers can not do. Again they had two roads to go: The first was to believe that these new kind of computers can be built. This could be great! The second was to believe that these new kind of computers can not be built. This would not be so great but again it would be interesting and it would be great to understand the reasons. This time many people took the wrong road and believed that the new kind of computers can be built. They took the wrong road but they gave us a good ride for our money. Maybe they took the wrong road because they believed some guy with beautiful thoughts. But this guy was joking or just wrong. Anyway, the road people took was the wrong road and my work is to explain why. My work has three parts. The first part is to explain what was stupid in what people thought: People missed that a computer is different from a car! The second part is to draw a different picture. This is hard and takes a lot of time. The third part is to explain why not to believe in these new computers: One reason is that the new computers are able go back in time in such a way we never saw before. All this is very exciting! 11. Scott Says: Ronak #9: Is it really fair to use common words in uncommon meanings, like you did ‘bit’ and ‘state?’ Eh, if you wanted to nitpick, you could also complain about compound forms like “day to day life” and “open problem,” the use of “reason” as a verb, or the use of numerals. But rather than endlessly adjusting the bar, I decided to stick with a simple standard: namely, could I get the Up-Goer Five editor to accept my text, through correct English use of its thousand allowed words? Try it — it’s fun! 12. Yuval Says: Scott #11 and Ronak #9: I agree with “reason” as a verb and “state”, but I actually think “bit” is fine. If you think of “bit” in its common definition of “little piece of,” the sentence not only makes sense, but means exactly what it should. It’s a neat coincidence that the more precise, esoteric word has the same spelling and pronunciation — but that doesn’t necessarily mean that using the common definition is wrong. 13. 
Reza Says: Gil #10: “Maybe they took the wrong road because they believed some guy with beautiful thoughts. But this guy was joking or just wrong.” Some guy: Do you mean Richard Feynman from Caltech? 14. Attila Szasz Says: I admit my following question is not very fair, since its christmas on one hand, and on the other hand im pretty sure you’re getting lots of annoying questions about the topic, mine probably included; but still i was just wondering wheter you’ve developed some insights regarding this borel-determinacy-stuff approach of Gowers’ in connection with algebraization? Do his criteria for the suitable complexity measure have some chances/promises of opening up the blackbox wider, in the sense of being more powerful than just carrying through oracle extensions? (maybe like in 11.2 in your paper, suggesting the ability to capture non-obvious structure) 15. Scott Says: Attila: Sorry, I simply haven’t studied Gowers’s ideas enough to be able to express an informed opinion. I think Gowers would be the first to tell you that (a) the analogy between Borel determinacy and circuit complexity was already made, by Sipser in the 1980s, and in that case it led to AC0 lower bounds but not further than that, and (b) Gowers has some ideas for how to maybe get beyond AC0, but they’re at an extremely speculative stage right now. I really admire Gowers for putting his speculative ideas (clearly labeled as such) out there—not everyone has the courage to do that, and I wish more people did. In return, I think, we should do him the favor of not treating his musings like published papers. Now regarding algebrization, I’d say that Ryan Williams’s NEXP⊄ACC result has already convincingly gotten around it, so we know that it’s not some impenetrable black box. Closer to the question at hand, the original PARITY∉AC0 results from the 1980s—the ones vaguely inspired by Borel determinacy—also got around relativization and algebrization, to whatever extent those barriers even make sense at such a low level. The problem was “merely” that those results fell prey to natural proofs. So, despite not understanding Gowers’s ideas, I suppose I’ve arrived at the tentative conclusion that the more relevant question is not how they get around relativization or algebrization, but rather how they get around natural proofs. 16. Attila Szasz Says: Yeah, sorry, thats what I meant by annoying, I totally get that those are rudimentary ideas, and he warns everyone not to do the harmful hype hysteria. I couldn’t resist, but actually, I had a more general question in mind, which is now immediate from your reply, so let me phrase it: How much can these barriers be independent of each other? I realize that approaching PvsNP from the smaller classes’ direction has the naturalproof barrier ‘by default’, and algebraization is more relevant for bigger class separations. So in the scenario that somehow you manage to be smart say on the level of NEXP vs P/poly, requiring non-algebrizing stuff, it seems reasonable that at this point you hope to improve on your method which ‘automatically’ escapes estimating a function too directly so you can take your shot at P NP. But given this would be a separation result, and you’re also separating from the from-small-classes approach aren’t you in fact bound to show this quite explicitly in the process? (Incidentally, by now i expect answers to this more or less either in the form “Dude, no one knows.” or one that points out I’m missing something crucial.) 17. 
Rahul Says: Tried my hand at Up-Goer Five Text Editor….. 🙂 The work these people do is saying what are all the great things we could do if we had a great great “new kind of computer”. In real life such a computer is not made yet. No one is even close to making it. Many people pretend we are close to such a computer. But we are not. There are many real problems in making such a computer. Most good people who study them understand this and agree. Other people want money or other things. So they lie and pretend to the world that this “new kind of computer” is right around the corner. I think what these people study is like a game. They try to see what’s the best they could do for us within this game. Games are fun. Knowing about our world is also important. But we have to decide how much time, money and attention we give these people. I think we are giving them too much right now. How much money would you give someone to make you sure that this “new kind of computer” can not be built? Maybe we will learn some interesting things along the way. But it might be easier to try to learn them in other ways. We have to learn to trust those who are good and do not lie about what their “new kind of computer” can do yet. We must also spend more on trying to actually make this “new kind of computer” and spend less on imagining what sorts of nice things we could do if only we had this “new kind of computer”. Imagine if hundred years before the first car was actually built hundreds of ten hundreds of people were given lots of time and money to tell you what sorts of great and fun things you could do if only you had a car. These people have brothers who actually try to make today’s computers do more things and faster by thinking of new ways to do these same things. I think these brothers do important things but they do not get so much attention. That is a little sad. 18. J Says: Let N be a positive integer. Let $M=\lceil\sqrt{N}\rceil$ and $M+1=\lceil\sqrt{N}\rceil + 1$ not divide N. Is the problem of deciding whether Z is $H! \mod N$ for some 0<H<M in NP or Quantumn version of NP? Can the problem be in Quantum polynomial time? 19. Scott Says: Rahul #17: I find your argument extremely persuasive—assuming, of course, that we’re both talking about Bizarro-World, the place where quantum complexity research commands megabillions and is regularly splashed across magazine covers, while Miley Cyrus’s twerking is studied mostly by a few dozen nerds who can all fit in a seminar room at Dagstuhl. 20. Rahul Says: Scott #19: Is “megabillions” the new minimum standard for over-funding a scientific research area? 21. Rahul Says: Scott: To be less frivolous either: (a) You think it’s axiomatic that no legitimate research area can ever be considered over-funded (till it gets “megabillions”?)? Do you? (b) You agree that certain areas may at certain times be over-funded. In which case we can discuss which areas these are. Obviously, the assessment is subjective. I’ve no grudge if a rich millionaire were using his own purse to fund pretty much anything, even research on homeopathy or clairvoyance. But if it is public money I think allocation profile is fair game for debate. Unless, of course, you think funding money is a non scarce resource. But I doubt that’s your view. 22. Joshua Zelinsky Says: As long as we’re discussing algebraization, R. J. 
Lipton just had a post http://rjlipton.wordpress.com/2013/12/26/re-gifting-an-old-theorem/ about how every circuit has a corresponding circuit with the same behavior but the second circuit uses very few not gates and is only slightly larger than the first circuit. I wonder if something like this can be used to get past the algebraization barrier since the second circuit’s small number of not gates could act effectively as a restriction for what the circuit’s algebraized form has to look like so that they can’t just be generic polynomials. 23. Fred Says: Hi Scott, when you say “If it’s not possible, then that’s even more interesting to me than if it is possible!” Would it still be really that interesting to you if the impossibility was purely from an engineering/economics perspective? For example if the cost of developing a practical quantum computer was prohibitively high for the practical benefits it would bring. Imagine that developing a non trivial QC would be on the scale of building the LHC – that’s clearly too costly if we can’t show beforehand that it could be used for something better than factoring large numbers or quantum annealing. But maybe we’re still at the stage where the engineering limitations aren’t clear enough? For the case of using nuclear fusion as a practical source of energy it seemed for a while that maybe it could be done cheaply with “cold fusion” and whatnot (but in that case the huge costs of trying to build an effective “Tokamak” reactor clearly don’t outweigh the potential long term benefits). 24. Scott Says: Fred #23: What you’re talking about is not QC being impossible, in the sense that skeptics like Gil Kalai (see above), Leonid Levin, Michel Dyakonov, or Robert Alicki mean by “impossible.” (To ward off precisely this confusion, I usually say something like “fundamentally impossible” or “impossible in principle,” but those concepts are hard to express in Up-Goer Five. 🙂 ) You’re just talking about QC being sufficiently hard that people decide to give up. Yes, of course that would kind of suck. But at least we would’ve learned a lot about quantum mechanics and computation, and there would’ve been other spinoffs for science and technology (there always are, and there have already been some). In my personal opinion, proving that QC can work would be at least as interesting—purely for fundamental physics, setting aside all applications!—as the discovery of the Higgs boson was. And our civilization (OK, mostly the EU) decided that finding the Higgs boson was worth $11 billion. It’s hard for me to understand how people can simultaneously believe that that was justified, and that spending a comparable amount to prove the reality of beyond-classical computational power in our universe wouldn’t be. But maybe the Rahuls of this world were also against the LHC, in which case their position would at least be consistent. (Incidentally, other projects that strike me as LHC-complete include a sample-return mission to Mars, and cool space-based physics like gravity-wave detectors at the Lagrange points of the solar system.) Now regarding nuclear fusion, I think the real tragedy is that essentially everything that’s hoped for from fusion energy, can already be had right now from modern fission reactors. The reasons why we don’t get almost all of our power from fission are probably less technological than they are social, political, and psychological. 
(E.g., we’re unwilling to accept meltdowns that occasionally kill or poison dozens of people, even though we do accept fossil-fuel use that’s destroying the entire planet.) 25. Gil Kalai Says: The very first words of Aram Harrow in our debate regarding quantum computers were: “There are many reasons why quantum computers may never be built. We may fail to bring the raw error rate below the required threshold while controlling and interacting large numbers of qubits. We may find it technically possible to do this, but too costly to justify the economic value of the algorithms we are able to implement. We might even find an efficient classical simulation of quantum mechanics, or efficient classical algorithms for problems such as factoring. The one thing I am confident of is that we are unlikely to find any obstacle in principle to building a quantum computer.” Aram’s sentiment is very similar to Scott’s comment #24. On this matter I disagree with Aram and Scott. One thing I am confident about is that if quantum computers will never be built then we will be able to understand the principle behind it. We will be able to find the principle (or the appropriate mathematical modeling) for it if it is a fundamental physics principle, and we will be able to find the principle (or the appropriate mathematical modeling) if it is “just” an engineering matter. As a mathematician, the challenge is to find the appropriate modeling, and it almost makes no difference one way or the other. (OK, of course it will be more pleasing to stumble upon something fundamental, but the mathematical challenge and interest is separate from this matter.) As Scott wrote, indeed I regard quantum computers as implausible and I think this is the way things will turned out, but as a mathematician I find the question of how to model quantum evolutions without quantum fault-tolerance fascinating whether it represents the border between reality and fantasy or if it represents the border between present and future technology. Whether you are skeptics or a believer, an explanation is needed for why nature makes quantum fault tolerance and universal quantum computation so really hard. As an aside let me comment that the problem with the terms “obstacle in principle,” “fundamentally impossible” or “impossible in principle,” is not just that they are not expressed in Up-Goer Five. A more serious problem is that they don’t have a definite formal meaning. . 26. Raoul Ohio Says: 1. I am not a fission power hater, but FE does have a LOT of problems, and certainly will not fulfill the “clean, too cheap to meter” fusion hype from the last 50+ years. The sad reality is that all power sources have a lot of problems, and are usually limited. 2. I have no doubt that 100’s or 1000’s of scientists are sure that their specialty could do great stuff with$11G or so from the EU. But I think there is widespread agreement that physics looks into fundamental stuff that is at the bottom level. This kind of boils down to: “Are the QCD rules correct? Complete? The LHC is not a “find the Higgs” machine, it is the biggest microscope yet, to see what is there, and if the rules apply. The first thing the LHC has found might be the Higgs. It also might not. 27. Fred Says: One of the difficulty in this debate is that it’s not clear what sort of minimum quantum circuit would clearly qualify as a computer (a custom quantum circuit that factorizes 15=5×3 is more a calculator). 
Early classical computers were limited (laughable by today’s standards) but still immediately useful as general-purpose computing systems (with an emphasis on programmability). The improvement of the hardware has been so amazing that we tend to forget that classical computers are limited in many ways – minimum energy thresholds, the speed of light, quantum noise (transistors leak and erode, etc), miniaturization (at best one transistor == one atom). Those aren’t just engineering difficulties but inherent theoretical limits. Today those limitations are pushing back in the other direction (no more free lunch) forcing software engineers to be aware of the underlying hardware (break of abstraction) and try to rewrite their code to take advantage of hundreds of CPU cores, etc (we see a resurgence of functional programming languages, which makes parallelization somewhat less painful). QC seems trickier because the approach is bottom-up from the start, using particle state to store qbits, rather than using large clumps of matter to store bits, then refining little by little. But in the end I wouldn’t be surprised if the two technological efforts meet, hitting the same physical limitations and solutions. 28. GDS Says: tl;dr Now we need a tweet editor to reduce it to <=140 characters. 29. GASARCH Says: Your post seems to imply that QC gives speedup for factoring but not much else. When I made the same claim in a recent post I was refered to Quantum Algorithms for algebraic problems by childs and dam which has MANY algebraic problems with super-poly speedup. So— are there really MANY algebraic problem s with super-poly speedup? I was also told that quantum simulations are also an application of QC (I HAD mentioned this in my blog, but only briefly). Not sure I have a question here- just want you to comment on these other apps. 30. Scott Says: GASARCH #29: Yes, there are quantum speedups for lots of number-theoretic, algebraic, and (abelian) group-theoretic problems other than factoring. Which is a difficult fact to express in the Up-Goer Five editor. 31. Mark Thomas Says: Nice and clear explanation. A question: Could a subtle gravitational wave passing through the quantum computer decohere the superpostion states?. 32. Scott Says: Mark Thomas #31: In principle, sure. In practice, I’m told that gravitational decoherence is so many orders of magnitude smaller than other, more “prosaic” decoherence sources that it’s never anything to worry about. (And of course, if the failure of your quantum computer to work led instead to the first direct experimental detection of gravity waves, I believe that would count as an experimental success! 😀 ) Here I’m not talking about the exotic speculations of e.g. Roger Penrose, according to which gravitational effects would ultimately change quantum mechanics itself. If true, that could certainly affect quantum computing, but so far there’s not the slightest evidence for it. (Again, though, another reason to care about quantum computing is that, if there were anything that modified quantum mechanics, we’d like to push QM to its furthest limits and discover what that something is.) 33. Gil Kalai Says: I certainly support making large investments in trying to demonstrate (or refute) beyond-classical computational power. In my opinion, it is a mistake to identify “scientific value” with “attention” and with “resources”. 
Both resources and attention (of various forms) are instruments rather than targets in science, and they depend (and should depend) on various other factors. 34. Rahul Says: @Gil Can you expand on what you mean by your comment on value, attention & resources? It’s intriguing but I may not be understanding it fully. 35. Gil Kalai Says: Rahul, when it come to “resources,” beside “scientific value” a crucial ingredient is to what extent and in what way resources are useful to advance the project. When it comes to attention there is a vast difference between various scientific projects in the way they appeal to large audiences (within and outside science). 36. Fred Says: This new article quotes Scott, but I’m not sure if the quote is new http://m.washingtonpost.com/world/national-security/nsa-seeks-to-build-quantum-computer-that-could-crack-most-types-of-encryption/2014/01/02/8fff297e-7195-11e3-8def-a33011492df2_story.html 37. Scott Says: Fred #36: Yeah, they interviewed me. The article itself is basically fine (except that, in the paragraph about “how QC works,” I wish it had said something about interference of amplitudes, rather than “avoiding unnecessary calculations.”). But the comments on the article just spoiled my day. They feature a guy who says that, since the article didn’t explain to his personal satisfaction how a QC would work, the idea must be bunk; a troll who replies to every comment referring to QC in future tense by saying that QCs already exist, and linking to the Wikipedia article about D-Wave as proof … OK, let me stop before I get even more depressed. 38. Fred Says: Scott #37: haha, yeah, internet comments… you pretty much have to ignore them. 39. Jr Says: Fred #38: With certain exceptions of course. 40. Gil Kalai Says: All along, my goals in my skeptical pursuit of quantum computers were to understand the matter and also what other people thoughts are, and to describe via mathematical modeling and reasoning an alternate theory of quantum noise – a theory that will draw the “quantum fault-tolerance barrier,” explain why computationally superior quantum computers are impossible, and will have further major consequences for quantum physics as a whole. Looking at it now, from the perspective of this post (and #10), it looks that I was selling myself short. My newly expanded goals would be, in addition, to have all these eleven words: smoothed, Lindblad, evolution, trivial, flaw, detrimental, quantum , noise, spontaneous, error, and synchronization, making it to the list of 1000 most used English words! 🙂 It was nice, Scott, to meet you again in Jerusalem, Thanks for your excellent lectures. 41. Eric Cordian Says: Does the AdS/CFT correspondence, in which gravity in anti-de Sitter space is equivalent to a Yang-Mills quantum theory on the boundary, offer any insights as to whether quantum computing is actually more powerful than classical computing? Could this correspondence be used to map a quantum computer on the boundary into a classical computer on the higher dimensional curved multi-connected manifold? 42. Scott Says: Eric Cordian #41: Good questions! I posed your questions to Daniel Harlow, a quantum gravity expert of my acquaintance, and he replied as follows: 1) The bulk theory is also quantum, so the duality is between two quantum theories. The bulk theory is classical in a certain limit, but just in the usual way that the standard model is classical in a certain limit. 
The only nontrivial point is that the classical limit is one that you wouldn’t necessarily want to call classical from the boundary CFT point of view. 2) The super Yang-Mills theory should be efficiently emulatable on an ordinary quantum computer, with overhead which is some low order polynomial in N (the N of SU(N)). This hasn’t been shown rigorously, but I expect it to be the case. I can’t be sure that there isn’t some crazy kind of algorithm involving throwing stuff into black holes, etc, but these two points make me think that AdS/CFT is unlikely to say much about the structure of complexity theory. 43. Digressions loosely related to Scott Aaronson | The Daily Pochemuchka Says: […] Here are three short essays on quantum computing explained to a lay audience using only the ten hundred most commonly used words in English: […] 44. My research explained, using only the 1000 most common English words | Short, Fat Matrices Says: […] by Scott, Afonso and Joel, using the Up-Goer Five Text Editor, which in turn was inspired by this xkcd. I […]
https://zbmath.org/?q=an%3A0921.62063
# zbMATH — the first resource for mathematics On residual empirical distribution functions in ARCH models with applications to testing and estimation. (English) Zbl 0921.62063 Let $$\{y_i\}$$ be an ARCH(1) model, i.e. $y_i=\sigma_i \varepsilon_i,\quad\sigma^2_i= \alpha_0+ \alpha_1y^2_{i-1}, \quad i\in\mathbb{Z},$ where the $$\{\varepsilon_i\}$$ are i.i.d. random variables with unknown distribution function $$G(x)$$, $$E\varepsilon^2_1=1$$, $$\alpha_0>0$$, $$0\leq\alpha_1 <1$$; $$\alpha= (\alpha_0,\alpha_1)$$ is an unknown parameter vector. The author considers the residuals $$\varepsilon_k(\theta)= y_k(\theta_0+ \theta_1y^2_{k-1})^{-1/2}$$, $$k=1,\dots,n$$, and the residual empirical distribution function $$G_n(x,\theta)= n^{-1}\sum^n_{k=1} I(\varepsilon_k (\theta)\leq x)$$. Then it is proved that $\sup_{x\in R^1, |\tau|\leq \theta}\biggl| n^{1/2}\bigl[ G_n(x,\alpha+ n^{-1/2} \tau)-G_n(x,\alpha) \bigr]-2^{-1}xg(x){\mathbf E}[(\tau_0+ \tau_1 y_1^2)/(\alpha_0+ \alpha_1y^2_1) ]\biggr|=o_p(1).$ This interesting result is used for testing and robust rank estimation of unknown parameters. ##### MSC: 62G30 Order statistics; empirical distribution functions 62G35 Nonparametric robustness 62M10 Time series, auto-correlation, regression, etc. in statistics (GARCH) 62G05 Nonparametric estimation
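For illustration (this is not part of the reviewed paper), the residual empirical distribution function $G_n(x,\theta)$ can be simulated for an ARCH(1) process in a few lines of Python; the parameter values and innovation distribution below are arbitrary choices:

```python
import numpy as np

# Illustrative simulation (not from the reviewed paper): an ARCH(1) process
# and the residual empirical distribution function G_n(x, theta).
rng = np.random.default_rng(0)
alpha0, alpha1, n = 0.5, 0.3, 10_000

eps = rng.standard_normal(n + 1)   # i.i.d. innovations with E eps^2 = 1
y = np.zeros(n + 1)
for i in range(1, n + 1):
    y[i] = np.sqrt(alpha0 + alpha1 * y[i - 1] ** 2) * eps[i]

def G_n(x, theta0, theta1):
    """Empirical distribution function of the residuals eps_k(theta)."""
    resid = y[1:] / np.sqrt(theta0 + theta1 * y[:-1] ** 2)
    return np.mean(resid[:, None] <= np.atleast_1d(x)[None, :], axis=0)

# At the true parameter, G_n should approximate the innovation CDF (standard normal here).
print(G_n(np.array([-1.0, 0.0, 1.0]), alpha0, alpha1))
```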
https://www.ademcetinkaya.com/2023/03/quot-quotient-technology-inc-common.html
Outlook: Quotient Technology Inc. Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating. Dominant Strategy : Wait until speculative trend diminishes Time series to forecast n: 11 Mar 2023 for (n+4 weeks) Methodology : Modular Neural Network (DNN Layer) ## Abstract Quotient Technology Inc. Common Stock prediction model is evaluated with Modular Neural Network (DNN Layer) and Lasso Regression1,2,3,4 and it is concluded that the QUOT stock is predictable in the short/long term. According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Wait until speculative trend diminishes ## Key Points 1. Market Risk 2. What are buy sell or hold recommendations? 3. Can machine learning predict? ## QUOT Target Price Prediction Modeling Methodology We consider Quotient Technology Inc. Common Stock Decision Process with Modular Neural Network (DNN Layer) where A is the set of discrete actions of QUOT stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4 F(Lasso Regression)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Modular Neural Network (DNN Layer)) X S(n):→ (n+4 weeks) $R=\left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right)$ n:Time series to forecast p:Price signals of QUOT stock j:Nash equilibria (Neural Network) k:Dominated move a:Best response for target price For further technical information as per how our model work we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work? ## QUOT Stock Forecast (Buy or Sell) for (n+4 weeks) Sample Set: Neural Network Stock/Index: QUOT Quotient Technology Inc. Common Stock Time series to forecast n: 11 Mar 2023 for (n+4 weeks) According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Wait until speculative trend diminishes X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.) Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.) Z axis (Grey to Black): *Technical Analysis% ## IFRS Reconciliation Adjustments for Quotient Technology Inc. Common Stock 1. Annual Improvements to IFRSs 2010–2012 Cycle, issued in December 2013, amended paragraphs 4.2.1 and 5.7.5 as a consequential amendment derived from the amendment to IFRS 3. An entity shall apply that amendment prospectively to business combinations to which the amendment to IFRS 3 applies. 2. The characteristics of the hedged item, including how and when the hedged item affects profit or loss, also affect the period over which the forward element of a forward contract that hedges a time-period related hedged item is amortised, which is over the period to which the forward element relates. For example, if a forward contract hedges the exposure to variability in threemonth interest rates for a three-month period that starts in six months' time, the forward element is amortised during the period that spans months seven to nine. 3. The following are examples of when the objective of the entity's business model may be achieved by both collecting contractual cash flows and selling financial assets. This list of examples is not exhaustive. 
Furthermore, the examples are not intended to describe all the factors that may be relevant to the assessment of the entity's business model nor specify the relative importance of the factors. 4. An entity is not required to restate prior periods to reflect the application of these amendments. The entity may restate prior periods only if it is possible to do so without the use of hindsight. If an entity restates prior periods, the restated financial statements must reflect all the requirements in this Standard for the affected financial instruments. If an entity does not restate prior periods, the entity shall recognise any difference between the previous carrying amount and the carrying amount at the beginning of the annual reporting period that includes the date of initial application of these amendments in the opening retained earnings (or other component of equity, as appropriate) of the annual reporting period that includes the date of initial application of these amendments. *International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS. ## Conclusions Quotient Technology Inc. Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating. Quotient Technology Inc. Common Stock prediction model is evaluated with Modular Neural Network (DNN Layer) and Lasso Regression1,2,3,4 and it is concluded that the QUOT stock is predictable in the short/long term. According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Wait until speculative trend diminishes ### QUOT Quotient Technology Inc. Common Stock Financial Analysis* Rating Short-Term Long-Term Senior Outlook*Ba1Ba1 Income StatementCaa2Caa2 Balance SheetCBaa2 Leverage RatiosBaa2Caa2 Cash FlowB2C Rates of Return and ProfitabilityB2Ba1 *Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand financial state of the company? ### Prediction Confidence Score Trust metric by Neural Network: 89 out of 100 with 691 signals. ## References 1. Çetinkaya, A., Zhang, Y.Z., Hao, Y.M. and Ma, X.Y., When to Sell and When to Hold FTNT Stock. AC Investment Research Journal, 101(3). 2. A. Tamar, Y. Glassner, and S. Mannor. Policy gradients beyond expectations: Conditional value-at-risk. In AAAI, 2015 3. G. Konidaris, S. Osentoski, and P. Thomas. Value function approximation in reinforcement learning using the Fourier basis. In AAAI, 2011 4. M. Colby, T. Duchow-Pressley, J. J. Chung, and K. Tumer. Local approximation of difference evaluation functions. In Proceedings of the Fifteenth International Joint Conference on Autonomous Agents and Multiagent Systems, Singapore, May 2016 5. Bamler R, Mandt S. 2017. Dynamic word embeddings via skip-gram filtering. In Proceedings of the 34th Inter- national Conference on Machine Learning, pp. 380–89. La Jolla, CA: Int. Mach. Learn. Soc. 6. Hastie T, Tibshirani R, Friedman J. 2009. The Elements of Statistical Learning. Berlin: Springer 7. Sutton RS, Barto AG. 1998. 
Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press Frequently Asked QuestionsQ: What is the prediction methodology for QUOT stock? A: QUOT stock prediction methodology: We evaluate the prediction models Modular Neural Network (DNN Layer) and Lasso Regression Q: Is QUOT stock a buy or sell? A: The dominant strategy among neural network is to Wait until speculative trend diminishes QUOT Stock. Q: Is Quotient Technology Inc. Common Stock stock a good investment? A: The consensus rating for Quotient Technology Inc. Common Stock is Wait until speculative trend diminishes and is assigned short-term Ba1 & long-term Ba1 estimated rating. Q: What is the consensus rating of QUOT stock? A: The consensus rating for QUOT is Wait until speculative trend diminishes. Q: What is the prediction period for QUOT stock? A: The prediction period for QUOT is (n+4 weeks) ## People also ask What are the top stocks to invest in right now?
http://math.stackexchange.com/questions/119372/simplify-arithmetic-equation
# Simplify arithmetic equation

I have to solve the system:
\begin{align*} a+b &= S\\ a\times b&= P\\ \end{align*}
Someone told me it was equivalent to solving the equation $x^2-Sx+P=0$. I think it's linked to the formula for the sum and product of the roots of a second-degree polynomial, but I can't figure out why exactly this works.

From the first equation you have $b=S-a$; substituting this into the second equation gives you $a(S-a)=P$, which is easily rearranged to yield $a^2-aS+P=0$. This tells you that $a$ must be a solution of $x^2-Sx+P=0$. Since the original system is symmetric in $a$ and $b$, you know that the same must be true of $b$. And since $x^2-Sx+P=0$ has just two solutions, they must be precisely $a$ and $b$. And indeed you can write out the factorization: $$(x-a)(x-b)=x^2-(a+b)x+ab=x^2-Sx+P\;.$$

Thank you for your answer! – Skydreamer Mar 12 '12 at 19:47
The above does not correctly handle the case $\rm\:a = b.\:$ Either you need to assume $\rm\:a\ne b\:$ or else account for multiplicity when you say "just two solutions". – Bill Dubuque Mar 12 '12 at 19:47
@Bill: Of course I'm accounting for multiplicity. – Brian M. Scott Mar 12 '12 at 19:54
I don't see any such accounting. – Bill Dubuque Mar 12 '12 at 19:56
@Bill: It's implicit in what I wrote that I'm counting the roots by multiplicity. – Brian M. Scott Mar 12 '12 at 20:03

We describe how late Babylonian pupils (circa $500$ BCE) were shown how to solve the system of equations $a+b=S$, $ab=P$. We use the computer screen instead of incisions on a clay tablet, so our solution will be much shorter-lived than theirs. To make the derivation more familiar, we use algebraic notation that came a couple of thousand years later. But we give a correct description of the algorithm that was taught. We have $(a+b)^2=S^2$. Subtract $4ab$. We get that $(a-b)^2=S^2-4P$. Take the square root(s). We get $a-b=\pm \sqrt{S^2-4P}$ (but there were no negative numbers back then either, those were the good old days). Now we know $a+b$ and $a-b$. Add and subtract, divide by $2$, to find $a$ and $b$. So we have solved a "quadratic equation" without using the quadratic formula, indeed without writing down the equation. You will recognize the procedure as a slightly hidden completing of the square. Which reminds me, for al-Khwarizmi and many years after that, completing the square meant completing a geometric figure made up of a square with a square bite taken out of a corner to a real geometric square.

Interesting algorithm indeed. – Skydreamer Mar 13 '12 at 15:49

Both Brian M. Scott's and Andre Nicolas's answers are cool, but you can also consider Vieta's formulae, which for a second-degree polynomial $\alpha x^2 + \beta x + \gamma = 0$ are: \begin{align*} x_1 + x_2 &= -\frac{\beta}{\alpha} &&& x_1 x_2 &= \frac{\gamma}{\alpha} \end{align*} where $x_1$ and $x_2$ are the roots of that polynomial. However, that is just like your equations, where $\alpha = 1$, $\beta = -S$ and $\gamma = P$! Have fun!

That was the proof I was trying to find myself. Thank you! – Skydreamer Mar 13 '12 at 15:49
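As a quick numerical check of the answers above (my own addition, not part of the original thread), the roots of $x^2-Sx+P$ do recover the pair $(a,b)$:

```python
import numpy as np

a, b = 3.0, 7.0
S, P = a + b, a * b

# numpy takes polynomial coefficients in decreasing degree: x^2 - S x + P
roots = np.roots([1.0, -S, P])
print(sorted(roots.real))   # [3.0, 7.0], i.e. exactly {a, b}
```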
https://destevez.net/2022/10/grcon22-capture-the-flag/
# GRCon22 Capture The Flag I have spent a great week attending GRCon22 remotely. Besides trying to follow all the talks as usual, I have been participating in the Capture The Flag. I had sent a few challenges for the CTF, and I wanted to have some fun and see what challenges other people had sent. I ended up in 3rd position. In this post I’ll give a walkthrough of the challenges I submitted, and of my solution to some of the other challenges. The material I am presenting here is now in the grcon22-ctf Github repository. ### Allen Telescope Array challenge track It was in fact the suggestion of Derek Kozel to the ATA people of sending some radioastronomy or SETI related challenges to the CTF what prompted me to make a few challenges. In the end I was the only one who submitted ATA challenges. Hopefully next GRCon some more people will send ATA challenges. #### Filterbank painting and GUPPI painting: solutions These two challenges were intended as easy challenges that encouraged people from the wider SDR community to gain some familiarity with SETI tools and file formats. For Filterbank painting, a HDF5 filterbank file called ATA_59839_72460_3C286.rawspec.0000.h5 was provided with the only instructions “See what you can make of the attached file”. The filename is a realistically looking name for an observation of the quasar 3C286 at the timestamp MJD 59839.72460, though I think that the files produced by the ATA pipeline are not named exactly like this. It included the name of the tool rawspec, because in fact it was generated by rawspec, and that’s how this tool names its output files. The name of the challenge “Filterbank painting” was intended to suggest spectrum painting. Indeed, the flag was spectrum painted on the filterbank file, so the only thing needed to solve the challenge was to view the waterfall. This can be done with watutil from blimpy. I guess that even for people which were completely unfamiliar with this, it wasn’t too difficult to find some documentation online, such as this tutorial from Breakthrough Listen. Running $watutil -p w ATA_59839_72460_3C286.rawspec.0000.h5 we get the following waterfall. Blame watutil for putting the beginning of the time data at the bottom of the plot. Anyway, it’s possible to read the flag already, or we can simply mirror the image vertically. Moreover, watutil prints out the following metadata information --- File Info --- DIMENSION_LABELS : ['time' 'feed_id' 'frequency'] az_start : 0.0 data_type : 1 fch1 : 8383.75 MHz foff : 0.03125 MHz ibeam : -1 machine_id : 20 nbeams : 1 nbits : 32 nchans : 1024 nfpc : 16 nifs : 1 rawdatafile : ATA_59839_72460_3C286.0000.raw source_name : 3C286 src_dej : 30:30:32.96 src_raj : 13:31:08.2881 telescope_id : 9 tsamp : 0.001024 tstart (ISOT) : 2022-09-17T17:23:26.000 tstart (MJD) : 59839.72460648148 za_start : 0.0 Num ints in file : 2048 File shape : (2048, 1, 1024) --- Selection Info --- Data selection shape : (2048, 1, 1024) Minimum freq (MHz) : 8383.75 Maximum freq (MHz) : 8415.71875 Though not important for the challenge, I took the effort of making the ATA challenges look realistic. Here I put in the metadata for a hypothetical obervation of 3C286 at 8.4 GHz that would have happened at the time I was preparing this challenge. An alternative way of solving this challenge without knowing anything about filterbank files or blimpy is just using the fact that this is a HDF5 file (the UNIX file command will recognize it as such, for instance). Thus, we can investigate the file using h5py. 
In [1]: import h5py
In [2]: f = h5py.File('ATA_59839_72460_3C286.rawspec.0000.h5')
In [3]: f.keys()
Out[3]: <KeysViewHDF5 ['data']>
In [4]: f['data']
Out[4]: <HDF5 dataset "data": shape (2048, 1, 1024), type "<f4">
In [5]: import matplotlib.pyplot as plt
In [6]: plt.imshow(f['data'][:, 0, :]); plt.show()

This would also show the waterfall (and in the correct orientation). Once the "Filterbank painting" challenge was solved, the "GUPPI painting" challenge appeared. This was basically the same as filterbank painting, except that a GUPPI raw file called ATA_59839_73822_3C286.0000.raw was given instead of the filterbank file. The file had the same characteristics as the GUPPI file that I used to produce the HDF5 filterbank file from the previous challenge, which was supposed to be a stepping stone to this one. The way I thought that this challenge would be solved was to use rawspec to produce a new filterbank file, and then plot the waterfall as in the previous challenge. To do this, it is necessary to guess a more or less correct FFT size and averaging, or otherwise the waterfall would look awful. The filterbank file from the previous challenge could provide a reference, but I think that trial and error was also a good and quick approach. Unfortunately, rawspec can only run using CUDA, so it needs an NVIDIA GPU. I only realized this during the CTF when some people asked me about it. I'm sorry for this, since it makes the work more difficult for people without access to the appropriate hardware (specially taking into account that the in-person attendees would most likely be carrying laptops without an NVIDIA GPU). I don't know of a tool that is like rawspec but using the CPU. I think this might not exist, which is unfortunate. On Thursday, which was the last day of the CTF, Luigi Cruz and Sebastian Obernberger, who are interns at the ATA, presented a talk called "Using Allen Telescope Array Data on GNU Radio". In this talk, they presented gr-teleskop, which has a block to read GUPPI files in GNU Radio as if they were a regular IQ file. I knew that they were working on this topic (specifically, on inverting the polyphase filter bank that is used when generating GUPPI files), but I didn't know that they had something ready to use. This was an interesting find, because it provided a new, CPU-only, way of solving the "GUPPI painting" challenge: simply read the GUPPI file in GNU Radio with this block, and then do whatever you want to get a waterfall. Unfortunately, I've heard from some people that tried it that gr-teleskop crashed with the GUPPI file from the challenge. This file doesn't have the same format as the ones generated by the ATA beamformer pipeline (different number of channels and int8 versus float16 samples), but I hear that gr-teleskop should support GUPPI files from different telescopes, so I'll be taking a look at this in more detail and send an issue or pull request once I see exactly what happens. An alternative way of solving the challenge was basically doing the same thing as rawspec by hand. It's possible to read the GUPPI file either with blimpy or by hand (since the block structure is not that complicated), and then to do the FFT processing by hand on each coarse channel to produce a waterfall (if the GUPPI file samples are extracted to a 2D numpy array, this can be done in one go with np.fft.fft). However, this was too much work for what I intended to be an easy challenge.
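For reference, a rough numpy sketch of that "by hand" approach could look as follows. It assumes that blimpy's GuppiRaw reader can open this particular file and return the samples channelized as (coarse channels, time, polarization); the exact array layout and parameters may need adjusting, so treat this as a sketch rather than a tested solution.

```python
import numpy as np
import matplotlib.pyplot as plt
from blimpy import GuppiRaw   # assumption: blimpy's GUPPI reader can open this file

# Rough "rawspec by hand" sketch using the same parameters as the intended
# solution (16-point FFTs, 32 integrations). Only the first data block is
# processed here; loop over read_next_data_block() to cover the whole file.
nfft, navg = 16, 32
raw = GuppiRaw('ATA_59839_73822_3C286.0000.raw')
header, data = raw.read_next_data_block()   # assumed shape: (nchans, ntime, npol)

x = data[:, :, 0]                                     # one polarization
ntime = x.shape[1] // (nfft * navg) * (nfft * navg)   # trim to a multiple
x = x[:, :ntime].reshape(x.shape[0], -1, navg, nfft)
spec = np.abs(np.fft.fftshift(np.fft.fft(x, axis=-1), axes=-1)) ** 2
spec = spec.mean(axis=2)                              # integrate navg FFTs per row
waterfall = spec.transpose(1, 0, 2).reshape(spec.shape[1], -1)  # stitch coarse channels

plt.imshow(10 * np.log10(waterfall), aspect='auto', origin='lower')
plt.show()
```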
The challenge could be solved by running $ rawspec -j -f 16 -t 32 ATA_59839_73822_3C286 #### SETI: discussion This is my favourite challenge, because I put so much thinking in preparing it and I think it is really cool. I took care to fill it with many little details that probably no one would notice and that don’t really matter for the challenge. The challenge is intended as tribute to the novel Contact, by Carl Sagan (and also the film). This book/film occupies a very special place in the SETI and radioastronomy communities. I’ve heard from several people that this was one of the things that moved them to study astronomy, and the character of Ellie Arroway is said to be inspired by real-life Jill Tarter, who is one of the most important persons in SETI. An important portion of the book revolves around the analysis of an RF signal of extraterrestrial origin. Therefore, I thought that the topic would be extremely fitting for an ATA track at the GRCon CTF. Alas we can’t play the same trick next year. While preparing the challenge, I reviewed the book’s plot summary in Wikipedia and some sections of the book itself. I intended to generate a signal as accurate as possible to what the book describes. My thinking was that someone who remembered the book/film could use the plot as a guide for the challenge, and people unfamiliar with it could read up and find the information. I made a few changes and took some liberties that I saw fitted the challenge better. While Sagan describes many details about the signal in the book, I think he never needed to put everything together and actually generate the signal, so not all of what he says necessarily makes sense (more on this below). Also, since this is a CTF challenge, there are some constraints regarding difficulty, file size, etc. The part of the novel that is relevant to the challenge goes more or less as follows. Scientists participating in a SETI search discover a signal coming from Vega. The signal frequency is 9.24176684 GHz, and its bandwidth is ~430 Hz. It is ASK modulated, with power spectral densities of 174 and 179 Jy. The modulation encodes the sequence of prime numbers in binary. Afterwards they discover the same kind of signal being broadcast from Vega in many frequencies along the RF spectrum. At some point, the scientists notice that the signal uses polarization modulation. It is described to change between left and right handed circular polarization, thus encoding another message. In fact, when the book first describes the signal, it states that it is linearly polarized, which is another of its noteworthy features, because most signals of astronomical origin are unpolarized or only slightly polarized (with polarization degrees of up to ~10%). The strict reading of this is that the signal is elliptically polarized, with a large axial ratio (so at first glance it looks like linear polarization), and that the sense of the elliptical polarization keeps changing between right-handed and left-handed. The background here is that for some reason (which may become more clear below) our civilization does not use polarization modulation to encode data (but we use polarization to separate two waveforms, doubling the capacity of an RF link, or to give propagation diversity). However, an alien civilization may as well use polarization modulation, as it’s just another property of the RF wave that can be manipulated to convey information. Thus, the presence polarization modulation wasn’t obvious immediately. 
The polarization modulation sends a binary sequence (the two symbols are right-handed polarization and left-handed polarization), and the scientists see that it repeats with a period on the order of $$10^{10}$$ that is the product of three distinct prime numbers. They decide to arrange the sequence as a 3D array (there are only 3! = 6 possible ways to do this, given that the length is a product of three distinct primes), and interpret it as video. Note that this must be black-and-white video (not grayscale!), in the sense that pixels are either black or white. An “gray interpolation filter” is mentioned, which makes a lot of sense if the video uses dithering. There is also an audio track to go with the video on an “auxiliary sideband channel”, but no more details of this are given. The video turns out to be Adolf Hitler opening the 1936 Olympic Games. The plot then continues explaining that this was the first powerful enough radio transmission to reach nearby stars, and the aliens are just playing it back to us as a sort of “we’ve received you” message. The signal happens to be received at Earth after the round-trip light-time to Vega (50 years) has passed after the Olympic Games. Upon further analysis, the signal is revealed to contain detailed plans for an advanced machine, and so on, but here ends the segment of the book on which my challenge is based. The hardest part of setting up this challenge was making sense of this polarization modulation thing. It may work very well as a device for the novel plot, but it also needs to work well when you do try to deploy it as a real RF communications technique, as well as be an interesting trick for the CTF challenge (where the main goal was to make people read up a bit more about polarization, since most people are not too familiar with it). So, polarization modulation is a weird concept. Let us toy with the idea of changing the polarization of a CW carrier between two polarization states to transmit bits. The first remark is that in general we are going to see the polarization changes as amplitude changes in our receiver. For example, we could have a horizontally polarized receiver, and the signal changing between horizontal and vertical polarizations. To our receiver, the signal will appear when it is horizontal, and disappear when it is vertical. If we have a dual polarization (horizontal and vertical) receiver, as is common in radio astronomy, the signal will pop up in one polarization channel, and then in the other. This is no fun for the CTF, because if polarization changes appear as amplitude changes, then people can completely skip dealing with the polarization and treat the signal as ASK. Luckily, there are some choices of polarizations for which we do not see amplitude changes. These depend on the polarization of the receiver. The ATA feeds are linearly polarized, with channels called X and Y, which are horizontal and vertical (or the other way around; I never remember). For this kind of receiver, there are two choices of two orthogonal polarizations for which the X and Y channels see the same amplitude: • Diagonal linear polarizations, at +45 and -45 degrees. • Right-handed and left-handed circular polarization (RHCP and LHCP). I don’t like the idea of diagonal polarizations, because it looks like “wow, what a coincidence!”. 
In space there is no absolute frame of reference for what is vertical and what is horizontal, so the chances that the aliens are sending an RF signal that is polarized at a perfect 45 degree angle relative to our feeds are slim (specially taking into account parallactic angle rotation). On the other hand, circular polarization works very well, because it is not affected in this way by geometric rotations. This is the reason why circular polarization is very popular for space communications. Besides, the book says that the polarization modulation uses circular polarization.

So let's think that we want to make our CW carrier change between RHCP and LHCP. In our X and Y channels, circular polarization is manifested by the phase offset between X and Y: there is a phase offset of 90 degrees for RHCP and of -90 degrees for LHCP (or the other way around, depending on your handedness conventions). So if we want the CW carrier to switch, say from RHCP to LHCP, the phase difference between the X and Y channels must change by 180 degrees. To do so, we either maintain the phase of the signal in one of the channels and make the phase in the other channel jump by 180 degrees, or we make the phase of both channels jump simultaneously so that they end up changing their phase difference by 180 degrees. In any case, there must be a phase jump in at least one of the channels. So this is a problem for the CTF, because our polarization modulation appears as phase modulation. In fact, unless we randomize the phase jumps that need to happen, people will be able to read off the data from the phase modulation, defeating our whole scheme. The idea of randomizing the phase jumps is silly. No one would realistically do that as a communications scheme. I guess that in reality what the phase jumps look like depends on how the RHCP and LHCP polarizations are physically generated. But what needs to happen is that if we go RHCP, then LHCP, then RHCP, first the phases in the X and Y channels start at some values, then change to some other values for the LHCP, and then they must change back to the same initial values for the RHCP. In this case it's perfectly possible to read the data as phase modulation in a single channel, despite the fact that the two constellation symbols will not be 180 degrees apart in general. So it appears that our idea to do polarization modulation by changing between RHCP and LHCP doesn't work very well (the same problem with the phases happens if we try to use diagonal polarizations).

Luckily we are saved by the concept of fractional polarization. It happens that a coherent waveform, such as our CW carrier, or basically any of the waveforms we use in RF communications, can only have a pure polarization (either linear, circular, or elliptical). However, an incoherent waveform, which is basically Gaussian noise, can be unpolarized, or have any degree of polarization between 0% and 100% (this indicates the fraction of the total power that is polarized). The polarization is measured using something called Stokes parameters, which essentially involve measuring average power on different polarization channels, such as X and Y. Therefore, for a noise-like signal, its polarization is not an instantaneous property, but something that must be measured over some time average. (I should point out that here I am using the terms coherent vs. incoherent rather loosely; the Wikipedia article about unpolarized and partially polarized light treats these topics in more detail).
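To make the phase-jump problem described above concrete, here is a tiny numpy illustration (my own sketch, with an arbitrary handedness convention): switching a coherent carrier between RHCP and LHCP leaves the amplitude of the X and Y channels untouched, but the data shows up directly as 180-degree phase jumps in one of them.

```python
import numpy as np

# Complex baseband Jones vectors (X, Y), with an arbitrary handedness convention.
rhcp = np.array([1.0, -1.0j]) / np.sqrt(2)
lhcp = np.array([1.0,  1.0j]) / np.sqrt(2)

bits = [1, 0, 1]                                   # 1 -> RHCP, 0 -> LHCP
symbols = np.array([rhcp if b else lhcp for b in bits])

print(np.abs(symbols))                    # identical amplitudes in X and Y for every bit
print(np.angle(symbols[:, 1], deg=True))  # [-90., 90., -90.]: the data appears as phase jumps in Y
```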
If we use a polarized noise-like signal, instead of being able to choose between two values, RHCP and LHCP, we have a continuum range: from purely RHCP on one extreme, passing through unpolarized in the middle, and finishing at purely LHCP. This is nice, because if we want to send images we can encode pixel brightness as an analog value, instead of only black or white. What is more important is that for this noise-like signal it is not possible to see the polarization changes as phase jumps, since the idea of measuring the phase of random noise doesn’t even make much sense. A pragmatic way to think about the trick we have found is that to disguise the phase changes that need to happen between the X and Y channels when the polarization changes is to make the phase of our signal change all the time. This makes our signal be random noise. Now we may ask, does this idea look like a reasonable communications method? Well, since spark gaps fell out of use, all our RF communications have been coherent (in the sense I’m using here; I’m completely aware of non-coherent demodulation methods), because it is perfectly possible to build RF oscillators that generate a clean, coherent, sinusoid. On the other hand, my understanding is that many kinds of optical communications happen with an incoherent light source (also known as broadband light source), because it’s not easy to build coherent optical oscillators. I don’t know much about optical communications, but I think that even lasers have a noticeable linewidth. For this reason, incoherent light sources are typically modulated in amplitude, since phase or frequency modulation don’t make sense for such a noise-like source. So the point I’m trying to get at is that for communications, coherent signals are more useful, but sometimes it is not possible to build clean enough oscillators that can support them. Now, all known natural astronomical sources emit incoherent radiation. Think of many electrons doing their own random things in a magnetic field (synchrotron radiation). Each of them gives a small, random contribution to the total waveform. Polarization in these astronomical sources (either linear or circular) arises because of the presence of magnetic fields. These natural sources radiate tremendous amounts of RF energy. They involve huge areas of space, large amounts of matter, or whatever. Now imagine some advanced civilization that has some ability to create or control these kinds of natural astronomical sources, and to manipulate their magnetic fields to vary the polarization. Then it might make sense for them to use these kinds of sources as interstellar communications beacons, simply because they can get out so much more power than with clean electronic oscillators and amplifiers (and the power comes off omnidirectionally, which may be useful). For this kind of sources the only possible modulation methods that come to mind are amplitude and polarization, and I don’t know how they could achieve amplitude modulation, so perhaps polarization modulation seems the most reasonable option. There are surely many loose ends with this reasoning, but anyway, this is a CTF, not a SETI research paper. Hopefully I’ve managed to convince that my idea of a noise signal with polarization modulation is not too far fetched as a communications mechanism. With the idea of doing polarization modulation on a noise signal set in mind, I then decided to encode a single image as analog slow-scan television. 
I find that the idea of encoding an image as a sequence whose length is a product of two primes is not very good, neither for a CTF nor for active SETI (though this was used for the Arecibo message). First, the idea of the receiver determining the length of the sequence because they observe the same sequence repeating is not too robust to corruption by noise. Second, because in practice the receiver will typically attempt to compute the autocorrelation of the received data and look for peaks. With this, the receiver will certainly find the period of the sequence, but they will also find the correlation peaks given by the line length (see the solution to the challenge below). Including a nice blanking interval in the analog image signal will ensure strong autocorrelation peaks that the receiver will detect and become interested in. Certainly, one of the things I try first when reverse engineering signals (even streams of digital symbols) is to compute and plot the autocorrelation, and to represent the data as a matrix with a line length corresponding to the autocorrelation peak lags. This is a much more fruitful and robust technique than trying to notice that the length of some piece of data is some particular number. Interestingly, one of the challenges of the CTF, “Solve my bomb; Part 1: Bounty” was done like Arecibo message. There was a hint about images being sent line by line, and the image was sent as 0’s and 1’s in FSK using a semiprime length. When I got the hint, I immediately thought slow-scan television. I took the FSK signal as analog FM, found the line length via autocorrelation, and plotted the image. I never noticed that it was digital rather than analog, that there was a symbol rate, and that the message was some particular size. Finally, I decided to send a single image rather than video because a single image was enough to contain the flag, easier to decode, and produced a smaller file. Indeed, there is no way I could have included in a reasonable file size the $$10^{10}$$ bits of information they state in the book. This is another implausible aspect of the book, because if you think about it, binary polarization modulation is not magic and still has a spectral density of 1 bit/Hz, so with a waveform that is 400 Hz wide it would take 289 days to send $$10^{10}$$ bits (on the other hand, this $$10^{10}$$ value corresponds to 424 seconds of 1024×768 black and white (not grayscale) raw video at 30 fps, so it’s a reasonable value for the novel). #### SETI: solution Certainly I didn’t expect the CTF participants to go through all the thinking I have described above. I intended people to get the references to Contact, think of looking at the circular polarization, discover that there is actually a signal there, think again of the book/film, and try to get an image or video out of there. For this, the main difficulty was plotting the data with a line length that is not too far off from the correct one. Once you do that you can already see some geometric patterns and refine the line length by hand. An autocorrelation of the polarization signal was the easiest way to find the line length. I think that maybe it’s possible to look at the time domain polarization signal and discover that there is some periodicity, and thus arrive at the correct line length (specially because of the large blanking interval I put and because the image is not too busy, and the background is monochrome). However, to do this it is important to average the polarization with the correct length. 
Too short of a length, and it will be too noisy to notice anything (recall that the polarization of a noise signal is only defined as a time average). Too long of a length and you’ll average multiple lines together. The challenge description said: Last Sunday we received an unusual signal during a routine observation with the Allen Telescope Array 20-antenna beamformer. Experts believe that the signal originates from outside the solar system. The “last Sunday” part was intended to give a sense of urgency and novelty. The way this story is told is that the ATA scientists have found an extraterrestrial signal on Sunday, but since they’re not able to understand it completely, and coincidentally GRCon starts next day, they’re dropping the recording as a CTF challenge. Surely one of the participants will manage to solve the mystery (unfortunately no one did). Two SigMF files, ATA_59848_08825_HIP91262_polX and ATA_59848_08825_HIP91262_polY were given. The SigMF description says “HIP91262 ATA 20-antenna beamformer polarization X (GRCon22 CTF)”, and gives a timestamp of 2022-09-26T02:07:05Z and a frequency of 5.681361 GHz (below I’ll explain why I decided this not to be 9.2 GHz as in the book). The sample rate is 6 ksps. Note that the filename has nearly the same invented, but realistic looking, format that I used for the filterbank and GUPPI painting challenges. I don’t expect anyone to know what HIP91262 is, but a quick Google search shows it’s the star Vega. Since we are given files for polarization X and Y, it might already be suspicious that the polarization has something to do with the challenge. If we look at the waterfall we immediately notice some pulses in the signal, and a break in the pulses. This is the waterfall for polarization X, but polarization Y looks exactly the same. The file is quite long (1 hour), so it is reasonable to continue looking and move in time. We note that the signal Doppler drifts down, which is fair, because a signal from Vega should drift down because of Earth rotation Doppler. We quickly arrive at a part where we can see 2 pulses, then 3 pulses, then 5, then 7. If we count the next pulse trains we see 11 pulses, 13 pulses, etc. These are the prime numbers. Here I have deviated from the book and encoded the numbers in unary base rather than binary base. I think this makes way easier to see that these are prime numbers. The problem with binary is that you need a way to separate the numbers, and there is also the issue of endianness. Unary really stands out on its own. I think that counting (pulses in this case) is a much more basic ability than binary encoding, so maybe the aliens in the book should have also used unary. I have also increased the relative power difference between the pulse and no-pulse power levels from what the book says. The difference between 174 and 179 Jy is only 0.12 dB, so it’s quite hard to notice. I wanted the pulses to stand out easily. So at this point we have a series of prime numbers coming from Vega, so I hoped the reference to Contact to come to mind or pop up from a Google search (later on during the CTF, since no one had solved the challenge yet, I just dropped the reference to Contact on the CTF chat as a hint to try to help people). 
Once it is clear that the key is in the polarization, the idea of looking at the circular polarization in particular comes either from the book or from the observation that since the two files are linear polarization (X and Y), and they look identical in the waterfall, we're already looking at the linear polarization and see nothing relevant there.

There are basically two ways of measuring the circular polarization (i.e., the Stokes V parameter) using GNU Radio if we are given signals in the X and Y polarizations. The first involves converting to RHCP and LHCP as $$R = X + iY$$, $$L = X - iY$$, and then computing the difference in power of these. The second involves computing $$\operatorname{Im}(X \overline{Y})$$. If you work out the math you'll see that these two ways give exactly the same result, except perhaps for the sign (I'm completely ignoring signs and handedness conventions, since we don't care about them to get a video signal). If we do any of these and plot the Stokes V in the time domain, we already see that there is something going on, but the signal is too noisy. It is reasonable to try a lowpass filter. Below I show the unfiltered signal in blue, and the signal filtered through a 100 Hz lowpass in red. In doing this, I'm being a bit lazy and going for the most direct way to the solution, so I'm processing the full 6 kHz bandwidth, which includes the signal and the noise. This is not the best idea. Probably it pays off to perform Doppler correction and lowpass filter the signal before doing anything. An easy way to do Doppler correction is with my Doppler correction block. The Doppler is almost linear versus time, so we can measure in the waterfall the Doppler at the beginning (~1250 Hz) and end (~-200 Hz) of the IQ file and then write a Doppler file that lists these:

0 1250
3600 -200

If we do this, probably we will already get something in the Stokes V that looks like the red trace above, because we would be applying a lowpass filter with a cutoff somewhere between 200 Hz and 300 Hz (the signal bandwidth is around 400 Hz, as in the book).

The following GNU Radio flowgraph shows the two ways of computing the Stokes V and the output to files. There is also a part on the bottom to which we will come back later.

We can investigate the time domain Stokes V signal using Python. I prefer that to using GNU Radio, since GNU Radio doesn't provide good tools to plot the autocorrelation, select peaks, etc. I will use the unfiltered signal (the one in blue in the plot above), just to show that it isn't necessary to apply filtering to solve the challenge. One of the beauties of analog video is that even if it's very noisy, we are able to recognize features in the image. The Jupyter notebook for this part can be found here.

If we plot the autocorrelation we get the following. Since the autocorrelation of any signal is very strong at a lag of zero, and the autocorrelation is symmetric, it is reasonable to plot only the right half of this plot, omitting lags very close to zero, so that the vertical scale lets us see the peaks at non-zero lags better. This is shown below. We see that there are a few peaks. In particular, a noticeable peak is near the right part of the plot, at a lag of around 1.8e7 samples. This in fact corresponds to the total length of the image. It takes somewhat less than 1 hour (which is the length of the IQ file) to transmit the image, so part of the image repeats, and this is what this peak is showing.
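Condensed into numpy, the Stokes V computation and the autocorrelation search boil down to something like the sketch below. The file names and the search window are placeholders of mine, not the actual notebook code; the idea is just Im(X·conj(Y)) followed by an FFT-based autocorrelation.

```python
import numpy as np

# Load the X and Y channels as complex baseband (file names are placeholders;
# in practice they come out of the GNU Radio flowgraph or the SigMF datasets).
x = np.fromfile('pol_x.c64', dtype='complex64')
y = np.fromfile('pol_y.c64', dtype='complex64')

stokes_v = np.imag(x * np.conjugate(y))   # proportional to |R|^2 - |L|^2

# Linear autocorrelation via FFT (zero-padded to avoid circular wrap-around).
n = len(stokes_v)
f = np.fft.fft(stokes_v - stokes_v.mean(), 2 * n)
acorr = np.fft.ifft(np.abs(f) ** 2).real[:n]

# Look for the strongest peak at small lags, skipping the peak at lag zero.
search = slice(1000, 100_000)             # illustrative window
line_len = np.argmax(acorr[search]) + search.start
print('estimated line length:', line_len, 'samples')
```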
If we think for a second about what we want to find, which is the line length of the video transmission, none of the peaks that we see in the plot above are relevant. The video image should have hundreds or thousands of lines, so the autocorrelation peak given by the line length should be at a 1/100 or 1/1000 of the total file length. The peaks above are at much larger fractions of the file length, and so must correspond to larger features in the image that encompass many lines. To find the line length we need to look at the lags close to zero. We do so, obtaining the plot below. There is a noticeable peak at ~37000 samples. A peak at twice this lag is also present, and there are smaller peaks that seem to be at multiples of 1/4 of this value. We decide to go for the peak at ~37000 samples. We find the exact location of the maximum of this peak and take that as the line length. We reshape the Stokes V time domain data as a matrix using this line length and plot it. The image is shown below. The contrast is not great, because it is chosen automatically, but we can already see a vertical bar, some geometric features, and what looks like text on the right. The vertical bar is slightly slanted, so we tweak manually the line length to make it straight. This vertical bar corresponds to the blanking interval that I put in the video signal. The nice thing about this approach is that it is tolerant to mistakes in the line length. For instance, if we had used twice the correct line length, we would get a doubled image, and we would notice and halve the line length. If we had used half the correct line length, then we would still see the image, but would get a different thing on even and odd lines, such as when video interlacing is messed up. We could then try doubling the line length. Values which are close to the correct ones give slanted images, and can also be corrected by hand. However, values that are too far off give a complete mess with no clues of how to fix it. After tweaking the line length by hand (only by 3 samples) to make the vertical bar straight, rotating the image, changing the aspect, increasing the contrast, and circularly shifting the matrix to put the text in the middle (and the bar at the top and bottom), we get the following. We can clearly see the flag, and the GNU Radio logo. The reason why we have had to rotate the image is because I decided make the video lines of this image be scanned vertically rather than horizontally. All the video systems that I know of scan horizontally, so I decided to include this twist for fun (it is very easy to sort out for anyone that manages to see the image). Incidentally, the book mentions “And try rotating it about ninety degrees counterclockwise” when discussing how to display the video. An existing system that scans vertically is Hellschreiber, and in some sense this video transmission is similar to Hellschreiber, because it is sending an image which is a line of text, so its width is much larger than its height. With this, the challenge is solved, but there remains a question: what is the chequerboard pattern that appears on the image? This is actually caused by the ASK pulses that encode the prime numbers. Since we are plotting the power of the signal that is circularly polarized, power changes caused by the ASK modulation are registered in the image (the ASK modulation is applied to the partially polarized noise signal as a whole, increasing its total power and not changing the polarization degree). 
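Continuing the numpy sketch from above, the fold-and-display step is essentially a reshape followed by imshow (the numbers here are illustrative; in practice the line length is tweaked by hand until the blanking bar comes out straight):

```python
import matplotlib.pyplot as plt

# Fold the Stokes V stream at the estimated line length and display it.
line_len = 37000                      # illustrative; tweak until the blanking bar is straight
n_lines = len(stokes_v) // line_len
img = stokes_v[:n_lines * line_len].reshape(n_lines, line_len)

plt.imshow(img.T, aspect='auto', cmap='gray')   # transpose because the scan lines are vertical
plt.show()
```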
As we will see in the making-of below, there are exactly 8 ASK symbols (either pulse or no-pulse) in the time it takes to send a line of the image. Thus, the pulses on adjacent lines get lined up in the image and appear as rectangles. The gaps between the primes consist of an even number of no-pulse symbols (whereas between pulses in the same prime we only have gaps of one no-pulse symbol). Thus, every time that the signal moves on to a new prime, the pattern of pulse/no-pulse gets offset by one symbol, and this is what causes the chequerboard pattern. The width of the chequerboard rectangles is thus proportional to the number of pulses that are in the prime number, so we see that the width increases as the signal keeps sending larger and larger primes, and then goes back to very narrow when the signal starts again with 2, 3, 5, 7, 11, etc.

I could have prevented the chequerboard pattern from showing up in the image simply by making the extra power of the ASK pulses be unpolarized, so that the Stokes V parameter is not affected by the presence of the pulses. However, I decided that it was more natural to have a signal where the ASK modulation affects the polarized signal as a whole, rather than adding more unpolarized power, and that perhaps the chequerboard pattern could help someone find the correct line length by providing another simple geometric pattern on the image.

The way to remove the chequerboard pattern is to normalize the Stokes V by dividing by the total power of the signal (which is called Stokes I). Since the signal has the same power in the X and Y channels, we can simply measure the power in the X channel. I have included the power measurement at the bottom of the GNU Radio flowgraph. To do this, the Doppler of the signal is corrected, a lowpass filter with a cutoff frequency of 300 Hz is applied, and signal plus noise power is measured. The same is done on another part of the spectrum to measure noise power alone and subtract. The power measurement is averaged with a lowpass filter with a cutoff frequency of 5 Hz. The beginning of the signal power is shown here. We can see the ASK pulses, even though the power measurement is somewhat noisy. The power has already been normalized so that the average power is one. We can now divide the Stokes V measurement by the power measurement. To do so, it is important to take into account the group delays of the lowpass filters that we have used everywhere, as otherwise the measurements won't line up correctly. This gives the following image, where the chequerboard pattern has disappeared. If instead of using the unfiltered Stokes V measurement we start off with the filtered Stokes V measurement, we get the image below. We see that the difference in image quality is very small, so averaging the Stokes V is not really necessary. The main reason is that in any case matplotlib is doing some averaging to rescale and display the image, as it is more than 35000 pixels (samples) high.

#### SETI: making-of

Since I tried to make my Allen Telescope Array challenges as realistic as possible, I chose a timestamp for the observation when Vega was high up in the sky at the telescope. I intended to calculate and include the correct Doppler shift for Vega, and I wanted the Doppler drift to be clearly visible in the waterfall. For this, the signal should be narrower than the total Doppler drift along the IQ file.
The Earth rotation Doppler drift is maximum when the star is higher up in the sky, and it is also reasonable to conduct observations at this time to reduce atmospheric noise and losses. I calculated that for microwave frequencies the Doppler would change on the order of 1 kHz in one hour, so having a signal that was a few hundred Hz wide as in the book was good to make the Doppler drift easily visible. Since I needed to generate many frequencies to construct the waveform (the carrier frequency, the pixel duration, the line length, the ASK pulse duration, etc.), I decided to make everything a suitable power of two multiplied by the HI line frequency of 1420.406 MHz. This was intended just as a subtle reference to the importance of this frequency in radioastronomy. It was unimportant for the challenge, and I doubted that anyone would notice, specially because of Doppler. I would apply Doppler to all of the waveform features, not only the carrier frequency, just as it would happen in real life. All the signal generation is done in this Jupyer notebook. I used Astropy to calculate the Doppler from Vega using the heliocentric radial velocity correction. This accounts for Earth’s rotation and motion around the Sun, but not for the relative motion between the Sun and Vega. Thus, this Doppler is what we would see if the aliens are Doppler correcting their transmissions for a zero Doppler reception at the Sun. If they are beaming towards the solar system, that looks like a reasonable thing for them to do. The radial velocity comes out to 13.8 km/s, which means that all the frequencies involved in the waveform will change by about 46 ppm. The bulk of this velocity is due to the Earth motion around the Sun. On top of that we have the Earth rotation Doppler, which changes noticeably throughout the course of the observation. Following my idea of making all the frequencies a power of two times the HI line frequency, the best choice for the carrier frequency was 4 times the HI line frequency, which gives nearly 5681.623 MHz. This is good, because it is a reasonable frequency for the ATA to observe at, and is also relatively high, giving more Doppler drift in the waterfall. If we had chosen 8 times the HI line frequency, we would have obtained 11363 MHz, which is a bit high for the ATA. The ATA feeds do cover this frequency, but observations are usually conducted at lower frequencies, and also 11.3 GHz is full of RFI coming from satellite TV GEO satellites. This is the reason why the signal for this challenge sits at 5.7 GHz rather than 9.2 GHz as in the book. I felt that this was a small liberty I could take, specially because the book says that copies of the signal were broadcast all over the spectrum. The only special thing about the 9.2 GHz signal is that it’s the one that they found first. The Doppler at 5.7 GHz is quite large, on the order of -260 kHz. Since I wanted to make the sampling rate of the IQ file small in order to reduce its size, I chose the centre frequency accordingly. A frequency of 5681.361 MHz centres the received signal more or less well at baseband, so this is the centre frequency that I chose for the IQ file. Here we see the Doppler frequency at baseband, once we take the centre frequency into account. This is what appears on the IQ file. The Doppler is applied to the remaining features of the waveform using the concept of time of transmission offset. Since according to the Doppler Vega is receding from Earth, the time of flight of the signal increases with time. 
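As an aside, the Astropy part of the radial velocity calculation mentioned above can be sketched in a few lines. The observatory coordinates below are approximate values for Hat Creek that I have filled in, and the sign of the correction depends on conventions, so treat this as a sketch rather than the exact notebook code:

```python
from astropy import units as u
from astropy.constants import c
from astropy.coordinates import EarthLocation, SkyCoord
from astropy.time import Time

# Approximate coordinates for Hat Creek Radio Observatory (my own values, not
# necessarily the ones used in the signal generation notebook).
hcro = EarthLocation(lat=40.8175 * u.deg, lon=-121.47 * u.deg, height=1043 * u.m)
vega = SkyCoord(ra='18h36m56.34s', dec='+38d47m01.3s')   # or SkyCoord.from_name('Vega')
t = Time('2022-09-26T02:07:05')

v = vega.radial_velocity_correction(kind='heliocentric', obstime=t, location=hcro)
print(v.to(u.km / u.s))            # magnitude around 13.8 km/s at this epoch
print((v / c).decompose())         # fractional Doppler, on the order of 46 ppm
```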
We can express this increasing time of flight as the offset in the time of transmission that we see in the signal at each reception time, compared to the time of transmission that a signal with no Doppler would have had. This is just the integral of the radial velocity in units of time.

In order to keep the file size reasonably small, I decided to make the IQ recording one hour long. We need to be able to fit a whole image in that duration. The signal bandwidth is going to be some 400 Hz, as in the book, but as I mentioned, the measurement of Stokes parameters needs some averaging. Therefore, for best results the pixel clock should be much smaller than the signal bandwidth. This means that the image will be sent pretty slowly. I decided on a pixel frequency of $$2^{-26}$$ times the HI line frequency, which gives 21.17 Hz. At this rate, a 256 x 256 image (a reasonable minimum size to draw a flag as text) would be sent in 0.86 hours.

I prepared the following image in GIMP. I decided to make it 512 x 128 rather than 256 x 256, as that aspect ratio would be more convenient for writing the flag as text. Grayscale values will be encoded with different circular polarization values. Pure white is pure RHCP and pure black is pure LHCP (or the other way around). I also put in a thick bar of 50% grey (corresponding to unpolarized) to be used as a blanking interval, in order to simplify finding the appropriate line duration. As I mentioned above, the image is scanned using vertical lines rather than horizontal lines by the slow-scan TV signal.

The noise signal encoding the image is constructed in the following way. Each pixel will be formed by 64 “symbols” which are uncorrelated normal random variables. Thus, these symbols give white Gaussian noise at 1354.6 sps. A filter will be applied to this to reduce the bandwidth of the noise to approximately 400 Hz and to obtain the final sample rate of 6 ksps. Symbols are generated for the RHCP and LHCP polarizations, using the standard deviations given by the pixel brightnesses to achieve the polarization we want. Then we convert from the RHCP and LHCP basis to the X and Y basis, and work with the X and Y channels from this point on.

Now we apply the ASK modulation which contains the prime numbers. I decided that a pulse frequency on the order of 1 Hz was easily visible in the waterfall for a signal of this bandwidth. It was also easy to hear in case someone tried to listen to the signal. The exact pulse frequency is constructed such that each pulse lasts 1024 of the “Gaussian noise symbols” that we computed. Therefore, a pulse lasts 16 image pixels, so there are 8 pulses per image line. The sequence of primes consists of only the primes 2, 3, 5, …, 61. In the book the signal is described as counting up to very large primes, but I decided to make the sequence relatively short, so that it repeats multiple times during the recording. My hope was to make it clear, and not too difficult to see, that these are the prime numbers, but that there is no message hidden in them for the CTF challenge.

I decided to make the shape of the power spectral density of the signal be a Gaussian, because this looks more “natural” than a root-raised cosine or something like that. To achieve this, I built a polyphase arbitrary resampler using a Gaussian as the prototype filter and ran the “Gaussian noise symbols” through it. The parameters of the polyphase resampler (number of arms and taps) were chosen to make the spurious products small enough.
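As an aside, the generation of the per-pixel noise symbols and the conversion to the X/Y basis described above can be sketched roughly as follows. The RHCP/LHCP sign convention, the normalization and the function names are assumptions of mine rather than exactly what the notebook does.

```python
import numpy as np

rng = np.random.default_rng(42)
SYMBOLS_PER_PIXEL = 64

def pixel_symbols(brightness, n=SYMBOLS_PER_PIXEL):
    """Generate n complex noise symbols in the X/Y basis for one pixel.

    brightness in [0, 1]: 1.0 -> pure RHCP, 0.0 -> pure LHCP, 0.5 -> unpolarized.
    """
    def cnoise(power):
        return np.sqrt(power / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    r = cnoise(brightness)        # RHCP component power ~ brightness
    l = cnoise(1.0 - brightness)  # LHCP component power ~ 1 - brightness
    # Circular to linear basis (one common convention; the signs may be flipped)
    x = (r + l) / np.sqrt(2)
    y = 1j * (l - r) / np.sqrt(2)
    return x, y

# Quick check: a bright pixel should give a clearly positive Stokes V estimate
x, y = pixel_symbols(0.9, n=1 << 16)
stokes_v = 2 * np.imag(np.mean(x * np.conj(y)))
print(stokes_v)
```

With this convention, a brightness of 0.5 gives equal RHCP and LHCP power and therefore zero Stokes V on average, which is what makes the 50% grey bar act as an unpolarized blanking interval.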
When running the polyphase resampler, we also take into account the time of transmission offset that we calculated previously. In this way, we apply the Doppler to all the signal features, such as the pixel clock, pulse duration, etc. Strictly speaking, we aren't applying the Doppler to the signal bandwidth. The polyphase resampler maintains the same output power spectral density, while in reality the signal bandwidth should decrease slightly with time, because the Doppler is decreasing. This effect is very small, however, so I didn't consider it worth including.

On the other hand, Doppler really affects the slow-scan TV image. If we used the nominal line duration to try to assemble the image, we would get a slanted image, because Doppler has made the line duration longer. However, since the line duration is a weird made-up value, I didn't expect anyone to think much about it. When solving the challenge, we simply find the effective line duration, with the Doppler already applied. Interestingly, due to Doppler drift there is also a small curvature in the image. The effect is too small to notice easily, however.

When performing the resampling, a more or less random time offset is also added. This is used to make the IQ file start somewhere in the middle of the transmission of the image and avoid the unrealistic coincidence of having the image start exactly when the observation began. As it happens, the recording started while the second-to-last character of the flag was being sent, as we can see in the solution.

After running the signal through the polyphase resampler, Gaussian noise is added to set the SNR. I adjusted this by hand to make the signal have a moderate SNR. Finally, the carrier frequency Doppler is applied and the IQ data is saved as a SigMF dataset. As I wanted to erase all traces of the fact that I had prepared these files well in advance rather than actually observing with the ATA "last Sunday", I even removed the file timestamps in the tar file in which I put the two SigMF datasets. I learned how to do this in a StackOverflow answer.

### OFDM 101

The other challenge that I submitted was called OFDM 101. This was included in the "Miscellaneous" category. I have been working a lot with LTE this year, and I think that many people have never dealt with OFDM, and there are perhaps fewer good resources to learn about it than for other modulations, making it seem more difficult than it really is. Thus, I thought it would be a good idea to make an OFDM challenge, to get more people interested in learning about OFDM. Perhaps my work on LTE could even provide some useful resources.

Even though I classified this as a difficult challenge, because reverse-engineering a custom OFDM modulation is never going to be as easy as reverse-engineering something like FSK, I tried to make it as simple and straightforward as possible, hence the "101" in the name of the challenge. My idea was to make the data be ASCII text, which is easily recognizable. I used the text of the Wikipedia article on OFDM as filler text and inserted the flag somewhere in the middle. This had the goal of making the participants demodulate all the data rather than handling only the first few symbols and calling it a day, and providing some randomness for the data to prevent a weird-looking waterfall. I think that if you get a bunch of text, the idea to grep for something containing flag{ is obvious.

The OFDM modem was built around a 1024-point FFT, occupying only the central 3/4 of the carriers, and using a cyclic prefix of 1/8.
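A minimal sketch of how one OFDM symbol with these parameters can be generated is shown below. The exact carrier mapping around DC and the QPSK normalization are assumptions of mine, not necessarily what the notebook does.

```python
import numpy as np

FFT_SIZE = 1024
CP_LEN = FFT_SIZE // 8           # cyclic prefix of 1/8
N_CARRIERS = 3 * FFT_SIZE // 4   # central 3/4 of the carriers

def ofdm_symbol(qpsk_symbols):
    """Map N_CARRIERS QPSK symbols onto one time-domain OFDM symbol with cyclic prefix."""
    assert len(qpsk_symbols) == N_CARRIERS
    X = np.zeros(FFT_SIZE, dtype=complex)
    # Occupy the central carriers around DC, leaving empty guard bands at the band edges
    idx = (np.arange(N_CARRIERS) - N_CARRIERS // 2) % FFT_SIZE
    X[idx] = qpsk_symbols
    x = np.fft.ifft(X)
    return np.concatenate((x[-CP_LEN:], x))  # prepend the cyclic prefix

# Example: one symbol of random QPSK data
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(N_CARRIERS, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
tx = ofdm_symbol(qpsk)
print(len(tx))  # 1152 samples, i.e. 1024 * 9/8
```

Each symbol comes out as 1152 samples, which matches the 9/8 ratio that appears in the solution discussed below.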
The modulation was QPSK. I think these are rather standard values that won't surprise anyone. The frame structure was formed by frames of 8 symbols, in which the first symbol was all pilots and the 7 remaining symbols carried data. ASCII data was encoded in the QPSK symbols in the most straightforward way: allocating two bits per subcarrier in order of increasing frequency. The pilot symbols were made by repeating the CCSDS 32-bit syncword 0x1ACFFC1D as many times as necessary to fill all the subcarriers in each of the all-pilot symbols.

Since there are several ways of encoding pairs of bits as a QPSK symbol, I thought that it was appropriate to give the syncword as part of the challenge, to avoid any possible ambiguities in the QPSK encoding. Thus, the syncword was stated in the challenge description.

I made a simple Jupyter notebook to generate the OFDM signal without any channel effects. Then I used a GNU Radio flowgraph to add the channel effects. I added AWGN, carrier frequency offset, sampling frequency offset, and dropped a bunch of samples from the beginning, to avoid the IQ file starting exactly at the beginning of the data. I also added a multipath channel with some taps that I chose by hand. The multipath channel didn't change with time, but I guess this is somewhat realistic because the IQ file was rather short (only 95 milliseconds). In doing this, I wanted to avoid the textbook approach of "assume that the signal is synchronized". I wanted all kinds of synchronization to be a part of the challenge. I didn't want to make things unnecessarily difficult, so I took care that the SNR was high enough, that the multipath was not too crazy, etc. This is what the spectrum looked like. I made sure to leave at least around 20 dB SNR in every subcarrier.

For the SigMF metadata, I decided to add some subtle references to LTE, but I didn't intend to confuse anyone into thinking that this was really LTE. The sample rate for the file was 1.92 Msps, which is the sample rate used for an LTE carrier with 1.4 MHz bandwidth, and the centre frequency was 1810.75 MHz, which is somewhere in the middle of the downlink B3 band, which is a very popular LTE band.

This challenge was solved successfully by orangerf. Congratulations! They told me that they had worked previously on reverse-engineering some other OFDM signals. I don't have a full solution to show here, but at least I'll give some indications. The frame structure was easy to see in the waterfall, for instance using inspectrum. The pilot symbols were easily seen by their pattern (in hindsight it was very good that I repeated a relatively short sequence to form the pilots), and the length of the symbols could be found by the keying clicks caused by symbol changes. Here I have used cursors to mark the 8 symbols in a frame by hand. Note that I have obtained a symbol duration of 600 µs, which is exactly right.

The useful symbol length (and hence the carrier separation) could be found by using the autocorrelation of the signal. Due to the cyclic prefix, there will be a strong autocorrelation peak at the lag given by the useful symbol length. An alternative way of finding the useful symbol length was deduction/guesswork. The symbol duration we have found is 1152 samples. This is somewhat larger than 1024 samples (in fact it's 1024 samples times 9/8).
If I am giving the IQ file at the same sample rate that this OFDM modem supposedly uses, then 1024 samples looks like a very reasonable FFT size, and hence the useful symbol duration should be that (recall that the FFT size of an OFDM modem is equal to the useful symbol duration in samples, and that for efficiency, it should be a power of 2 or a product of powers of small primes).

Once we know these parameters, we can obtain a coarse time alignment from the waterfall and attempt to demodulate a symbol. Since I put in a carrier frequency offset of several subcarriers, we are not done yet. The subcarrier separation is 1875 Hz, and the carrier frequency offset (still unknown to us) is 3728 Hz. At this point we only need to determine the carrier frequency offset modulo the subcarrier separation, so that the subcarriers of the signal are correctly aligned to FFT bins. Once we achieve that, we can count how many FFT bins are occupied by subcarriers, note that our frequency offset is off by a few subcarriers, and fix it if we want. The frequency offset modulo the subcarrier separation can be found by hand (after all, the only thing we need is that most of each FFT bin is occupied by a subcarrier, instead of having half of the bin occupied by one subcarrier and the other half by the adjacent subcarrier). Alternatively, we can also use the variant of Schmidl & Cox that I mentioned in my post about the LTE uplink. The phase of the correlation peaks that we obtain with this method gives an estimate of the carrier frequency offset.

After we do this, we have coarse estimates for the symbol time offset and the carrier frequency offset. We can demodulate a pilot symbol and see that the constellation is QPSK. If we look closely at this pilot symbol, we will see that it is formed by 48 repetitions of the same 16-symbol sequence. We were given a 32-bit syncword in the challenge description, so now we connect the dots and realize how the pilot symbols work. At this point we can estimate the channel using this pilot symbol. Since the channel doesn't change with time (which is apparent in the waterfall from inspectrum), this estimate is valid for the whole IQ file. Now we can attempt to demodulate and equalize all the symbols in the IQ file. When we do this we will be able to refine our carrier frequency offset estimate, and we will also notice that there is a sample frequency offset and correct for it. This can be done as in my LTE posts (for instance, the one about the uplink). Once we have demodulated the data symbols, I guess that it's not too hard to realize that there is ASCII text, and putting all the demodulated data together, we can just grep for the flag.

### Never the same color

This was a track of challenges made by Clayton Smith (argilo). I managed to solve all the challenges in this track, and since Clayton liked the way in which I solved them, he invited me to share my solution in the CTF walkthroughs that we did on Friday. Hence, I'm also including my solution to these challenges here.

When I opened the IQ file with Inspectrum, I immediately realized that this was a video signal. Then I connected the dots and said "oh, Never The Same Color", remembering the saying about NTSC. There was also a reference to the back porch in the challenge description: "While sitting on my back porch, I noticed a strange signal and made a SigMF recording. Can you make any sense of it?"

I didn't know that much about NTSC and learned a lot with these challenges; this part of the Wikipedia page on NTSC helped me quickly get the information I wanted.
I first wanted to look at the audio subcarrier, since that would be the easiest to handle. According to Wikipedia it is FM modulated and in the upper part of the spectrum. We can see it at ~2.75 MHz in the waterfall above. Thinking that there might be additional signals hidden in the audio subcarrier (this had already happened in the Signal identification challenge, where a broadcast FM signal had flags in the mono, stereo and RDS channels), I made a simple GNU Radio flowgraph to FM-demodulate the audio subcarrier and write it to a file that I could use with Gqrx. Since Gqrx only handles IQ files, but the FM demodulator output is real, I made a complex signal by putting zeros in the imaginary part. In this way we get a symmetric spectrum, but all the information is there. I found a bunch of flags quickly by demodulating this file in Gqrx in USB and FM modes. Most of these flags were on additional audio subcarriers that are specified by the NTSC standard (second audio program and so on), and the flags said so, but I went through this so fast that I didn't pay much attention.

Once I was convinced that most likely there were no more flags in the audio, and still missing 3 flags out of a total of 7, I set to work on the video. Rather than using SigDigger, SDRAngel, or some other ready-made software to decode the video, I thought of doing my own crappy decoder in Python. The reason was that I've sometimes heard that these applications can be a bit finicky regarding synchronization of the video signal, and I thought that if there was something hidden in the video signal, my chances of finding it were better if I handled the signal by hand. This is the part of my solution which is interesting, because other participants used one of these applications (sometimes finding problems) or an actual TV to decode the video.

The first thing I did was to demodulate the luma component by using a Frequency Xlating FIR Filter centred at -1.75 MHz (which is where the luma carrier was) with a lowpass filter having a cutoff of 3 MHz, followed by a Complex to Mag block. The filter was used to get rid of the chroma and audio subcarriers. I didn't give any thought to how the fact that the luma is vestigial sideband should affect the demodulation, but this approach worked fine. On second thoughts, for best results it is necessary to handle the vestigial sideband by doing something clever. Also, my idea to centre the filter at the luma carrier was plain silly, since for AM demodulation it doesn't matter that the signal at the output of the filter is centred. It would have been best to centre the filter in the middle of the portion of the signal that we want to extract, which is asymmetric about the luma carrier. The GNU Radio flowgraph for this is shown here. At this point I didn't yet have the Polyphase Arbitrary Resampler. I'll come back to it later.

I thought of maybe using GNU Radio to display the video, but the SDL Video Sink block was missing on my machine and the Time Raster Sink is not very good for this. Instead, I used a Jupyter notebook to arrange the time domain luma signal into images. The approach was similar to the one I have described for the SETI challenge, except that here the line length is known. A quick Google search shows that the NTSC line frequency is 15734.25 Hz. This means that at 8 Msps there are around 508.44495 samples per line. We can round this number to 508 samples and use this line length to plot the first field of the video. As shown below, the image is very slanted.
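The arrangement into lines can be sketched like this. The variable and function names are mine, and the separation into fields and deinterlacing are glossed over.

```python
import numpy as np

SAMP_RATE = 8e6
LINE_FREQ = 15734.25                        # NTSC line frequency in Hz
SAMPLES_PER_LINE = SAMP_RATE / LINE_FREQ    # ~508.44 samples

def raster(luma, line_len=round(SAMP_RATE / LINE_FREQ)):
    """Cut the 1D luma signal into rows of line_len samples.

    Rounding 508.44 down to 508 means every plotted row starts 0.44 samples
    too early; the error accumulates and produces the slant in the image.
    """
    n_lines = luma.size // line_len
    return luma[:n_lines * line_len].reshape(n_lines, line_len)
```

Plotting the output of raster() with matplotlib (for instance with imshow and a grayscale colormap) then shows the slanted field mentioned above.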
Still there is a QR code which we can de-slant with GIMP and read. However, this is a rickroll. I had the idea to look at the rest of the video, since the image could change at any point. But to do this, it was a good idea to get rid of the slant first. Large slants like this happen especially when the line length in samples is short. We need to round it to the nearest integer, and the relative error made by this rounding is large. A simple trick is to interpolate the data to make the line length longer, so that the relative error caused by rounding decreases. A better trick, especially when we know the exact line length, is to use an arbitrary resampler with a resampling ratio close to one to make the line length be an integer number of samples. That is the purpose of the Polyphase Arbitrary Resampler block in the flowgraph above. It makes the line length equal to exactly 508 samples, rather than 508.44.

Note that this approach only works if there is no sample rate offset. If we were dealing with a signal from real hardware, there would typically be a sample rate offset of a few ppm, but Clayton has been nice and has included no sample rate offset in this file. The Polyphase Arbitrary Resampler approach works even in the presence of a sample rate offset, but we need to estimate the offset (by measuring the slant on the images) to refine the resampling ratio. With real hardware the sample rate offset will change with time, so this approach breaks down. A more complicated demodulator that uses the horizontal blanking interval to perform synchronization is needed. However, for a short recording this simple approach might be okay, even with real hardware.

Once we do the resampling, we get perfect images with no slant. In these images we can also see all the parts of the data which would normally be outside the screen, which are called the blanking intervals. The figures below show the first field (even lines), the second field (odd lines), and the deinterlaced frame.

Once I was happy with my code, I wrote a loop that wrote each frame to a PNG file with Pillow. Then, I looked through these images with feh. While quickly browsing the frames, I noticed that there were some horizontal white lines at the top of the image that changed every frame (we can see them in the figures above and below). Besides this, the image was static. However, when I arrived at frame 112, I noticed that the QR code changed. The QR code changed back to the usual one on frame 113. The QR code from frame 112 contained a flag. This special QR code appeared in other frames further down the IQ file, but each time only in a single frame, to make it difficult to see or catch for those playing the video in real time.

Missing two flags now, I decided to investigate the white lines above the image. I thought that maybe if I joined these lines in all the frames together, I would get some kind of image, so I wrote some Python code to do that, and got the following. Here each frame corresponds to two columns of the image, and only the first 400 frames are shown. We note that there is some kind of even/odd pattern in the columns, so maybe we need to group even and odd fields together. We get the following. Still, it was not clear to me what this was. It didn't seem like some crazy kind of barcode. Perhaps digital data of some sort, without scrambling, of course. Then I thought of teletext (I'm old enough to have used teletext as a child, but it was the PAL one).
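The Python code that joins these lines can be sketched roughly as follows. It assumes the frames are already available as 2D NumPy arrays, and the number of lines taken from the top of each frame is a guess of mine.

```python
import numpy as np

def stack_vbi_lines(frames, rows=slice(0, 2)):
    """Take a couple of lines from the top (vertical blanking interval) of every
    frame and place them side by side as columns, so that patterns that change
    from frame to frame become visible as an image."""
    return np.hstack([frame[rows].T for frame in frames])
```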
Up to this moment I was thinking of some custom way of embedding the flag just for this challenge, but maybe a standard method covered by NTSC was used. The teletext data must surely be sent somewhere in the video signal, and a quick search showed me that it occupies a portion of the vertical blanking interval, just like this data. However, all of what I read pointed to a much higher data rate than what I was seeing here.

Then I came across line 21 closed captions (also called EIA-608 or CEA-608). This was a way of sending closed captions digitally in line 21 of the vertical blanking interval of NTSC. The data rate looked very much like what I was seeing. I got my hands on the standards document (you need to "buy" it to download it, but it's actually free), and saw the following figure. It matched the data in the CTF file perfectly.

I took the first line of data and recovered the clock by hand, as shown here. Note that the clock run-in goes high and low in each bit duration, unlike the usual digital signals where we have a 101010 kind of signal. The same clock recovery was valid for all the fields. Then I sliced the bits, checked the parity just as a sanity check, and printed the data corresponding to the even fields, taking advantage of the fact that the character set used by EIA-608 is almost like ASCII. There was the rickroll URL again, but also the flag: Part 6: flag[21-is-the-magic-number]

I then did the same with the data in the odd fields. I could see something, but the characters were garbled. This is supposed to say "Part 7: flag[whatever]": Cneg 7: synt[pp3-sbe-gur-jva] The numbers and symbols were readable, but the letters were not. I had seen that EIA-608 had character sets for Cyrillic, Chinese, etc., so I spent some time searching for a character set for the Latin alphabet where the numbers and symbols were like ASCII but the letters were permuted. I even considered EBCDIC. At some point, rot13 dawned on me and I got the flag.

### Closing

The CTF was really fun and enjoyable, and there were other very remarkable challenges that I haven't covered here, such as the Dune track by muad'dib and the "SDRdle" by argilo (a Wordle game played with APRS messages and spectrum painting). Thanks to all the people who submitted challenges and to all the participants.
2023-03-22 15:13:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7010226249694824, "perplexity": 616.2725094422748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943845.78/warc/CC-MAIN-20230322145537-20230322175537-00707.warc.gz"}
http://mathhelpforum.com/differential-geometry/131591-limit-points-subsequences.html
# Thread: Limit points of subsequences

1. ## Limit points of subsequences

I have to give examples of four kinds of subsequences if they exist. S∈ℝ is the limit point of a subsequence of the sequence (Xn) (n=1 -> ∞) if ∃ subsequence (Xnk) (k=1 -> ∞) : Xnk -> S. What could the sequences be, or do they exist, if

a) limit point of subsequence = {0}
b) limit point of subsequence = {0,1}
c) limit point of subsequence = {infinitely many points}
d) limit point of subsequence = Q ∩ [0,1]

2. Originally Posted by antero: I have to give examples of four kinds of subsequences if they exist. S∈ℝ is the limit point of a subsequence of the sequence (Xn) (n=1 -> ∞) if ∃ subsequence (Xnk) (k=1 -> ∞) : Xnk -> S. What could the sequences be or do they exist if a) limit point of subsequence = {0}

Do you mean by this that the only subsequential limit is 0? Any sequence converging to 0 will do.

b) limit point of subsequence = {0,1}

Find a sequence, $\{a_n\}$, that converges to 0, a sequence, $\{b_n\}$, that converges to 1, and "interleave" them: $a_1, b_1, a_2, b_2, ...$.

c) limit point of subsequence = {infinitely many points}

It is a property of all real numbers that, given any $\epsilon$, there exists a rational number within distance $\epsilon$ of the real number. Since the rational numbers are countable, we can order them: $r_1, r_2, r_3, \cdots$. That is, the set of all rational numbers forms a sequence having all real numbers as subsequential limits, and there are, of course, an infinite number of them.

d) limit point of subsequence = Q ∩ [0,1]

Since there are an infinite number of rational numbers in [0, 1], any example for d is also an example for c. But, unfortunately, the example I gave for c does not work here, since its set of subsequential limits is too big: it includes much more than the rational numbers. I don't think you will be able to give a specific, numerical example. You might do something like this: since the set of rational numbers in [0, 1] is countable, it is possible to put them into a sequence: $r_1, r_2, r_3, \cdots$. It is a property of all real numbers, and in particular of rational numbers, that given any rational number $r_i$, there exists a sequence of rational numbers $\{a_{ij}\}$ that converges to that rational $r_i$. Now "interleave" those: $a_{11}, a_{12}, a_{13}, \cdots, a_{21}, a_{22}, a_{23}, \cdots, a_{31}, a_{32}, a_{33}, \cdots$.
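For what it's worth (this is not part of the original thread), one concrete choice for (b) is the sequence

$x_n = \begin{cases} 1/n & n \text{ odd} \\ 1 + 1/n & n \text{ even} \end{cases}$

whose set of subsequential limits is exactly $\{0, 1\}$. For (d), the doubly-indexed family $\{a_{ij}\}$ only becomes an actual sequence if it is enumerated along diagonals rather than block by block, for instance $a_{11},\ a_{12}, a_{21},\ a_{13}, a_{22}, a_{31},\ \ldots$, so that every $a_{ij}$ appears after finitely many terms.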
2017-10-20 02:29:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8287500143051147, "perplexity": 604.2644340185846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823605.33/warc/CC-MAIN-20171020010834-20171020030834-00626.warc.gz"}
http://www.research.lancs.ac.uk/portal/en/publications/onedimensional-scaling-limits-in-a-planar-laplacian-random-growth-model(195c57f4-7dda-424a-bb41-40c33e969eea).html
### Electronic data

• 1804.08462v1 Rights statement: The final publication is available at Springer via http://dx.doi.org/10.1007/s00220-019-03460-1 Accepted author manuscript, 620 KB, PDF document

## One-dimensional scaling limits in a planar Laplacian random growth model

Research output: Contribution to journal › Journal article

We consider a family of growth models defined using conformal maps in which the local growth rate is determined by $|\Phi_n'|^{-\eta}$, where $\Phi_n$ is the aggregate map for $n$ particles. We establish a scaling limit result in which strong feedback in the growth rule leads to one-dimensional limits in the form of straight slits. More precisely, we exhibit a phase transition in the ancestral structure of the growing clusters: for $\eta>1$, aggregating particles attach to their immediate predecessors with high probability, while for $\eta$ ...
2019-08-24 14:16:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7618160247802734, "perplexity": 1553.5698037849604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321140.82/warc/CC-MAIN-20190824130424-20190824152424-00102.warc.gz"}
https://www.tutorialspoint.com/What-does-the-end-function-do-in-jQuery
# What does the .end() function do in jQuery?

The end() method reverts the most recent destructive operation, changing the set of matched elements to its previous state right before the destructive operation.

## Example

You can try to run the following code to learn how to work with the end() function in jQuery:

<html>
<title>jQuery end() function</title>
<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
<script>
$(document).ready(function(){
   $("p").find("span").end().css("border", "2px blue solid");
});
</script>
<style>
p { margin: 10px; }
</style>
<body>
<!-- a paragraph containing a span, so the selector has something to match -->
<p>This is a <span>span</span> inside a paragraph.</p>
</body>
</html>
2021-07-25 13:19:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34843870997428894, "perplexity": 6557.281675955923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151672.96/warc/CC-MAIN-20210725111913-20210725141913-00182.warc.gz"}
https://en.wikipedia.org/wiki/User_talk:Lithopsian
# User talk:Lithopsian

Added new article Aradial Networks, very simple and similar to Aptilo Networks. I hope it will be approved this time.

## VY Orionis

You missed a publication called "Notes on VY Ori", which is about both VY Ori and VV Ori. Also, if you look up 2MASS J05333588-050132, there are some archives. SpaceDude777 (talk) December 16, 2016

## New Page Reviewer Newsletter

Hello Lithopsian, thank you for your efforts reviewing new pages!

Backlog update:
• The new page backlog is currently at 16,991 pages. We have worked hard to decrease from over 22,000, but more hard work is needed! Please consider reviewing even just a few pages a day.

Technology update:
• Rentier has created an NPP browser in WMF Labs that allows you to search new unreviewed pages using keywords and categories.

General project update:
• The Wikimedia Foundation Community Tech team is working with the community to implement the autoconfirmed article creation trial. The trial is currently set to start on 7 September 2017, pending final approval of the technical features.
• Please remember to focus on the quality of review: correct tagging of articles and not tagbombing are important. Searching for potential copyright violations is also important, and it can be aided by Earwig's Copyvio Detector, which can be added to your toolbar for ease of use with this user script.
• To keep up with the latest conversation on New Pages Patrol or to ask questions, you can go to Wikipedia talk:New pages patrol/Reviewers and add it to your watchlist.

If you wish to opt out of future mailings, go here. TonyBallioni (talk) 20:33, 24 August 2017 (UTC)

I think you are a great user, and that you deserve a higher place, like being an admin. Thus, I would like to appoint you for adminship (since you met most of the standards), so what do you think, and what do you think of it? I think it will be great. --Joey P. - THE OFFICIAL —Preceding undated comment added 05:32, 8 September 2017 (UTC)

Thanks for the vote of confidence. I don't think I'm quite ready for this yet. Or perhaps ever. There's lots of things I'm good at, but lots of things I'm not good at. I enjoy editing, probably wouldn't enjoy admin'ing quite so much. Thanks again for the offer to nominate me. Lithopsian (talk) 13:30, 8 September 2017 (UTC)

Wikipedia:Requests for adminship. ZaperaWiki44 (talk) 13:04, 11 September 2017 (UTC)

It would be great that you also become a checkuser and an oversight. ZaperaWiki44 (talk) 13:00, 15 September 2017 (UTC)

## equinox terms again

I see you have edited the article to show that the names of the equinoxes are not reversed in the southern hemisphere even though there was no consensus on the talk page to that effect. Sources were even cited that said they are reversed. Yes, it would be convenient if everyone just used the same names, but that just isn't the case. So I am asking if we can please change it back to show that there is some ambiguity in the terms. --Lasunncty (talk) 05:13, 17 September 2017 (UTC)

Sorry, but I can't do that. If you really believe there is significant mis-use of the term Vernal Equinox, perhaps that should be mentioned in the article, but I don't feel that a few misunderstandings in the popular press are sufficient to change the meaning of a solidly-defined scientific term like this. I would also suggest that the place for discussion is the talk page of (one of) the articles, rather than here.
"Private" discussions might sometimes be helpful, but can also give the impression of cooking up side deals and trying to sidestep consensus and full discussion. The discussion at Talk:Equinox unfortunately just got archived (still visible at Talk:Equinox/Archive_2 - note the previous formal merge proposal with pretty threadbare discussion) although the one at Talk:March_equinox is still alive and kicking with more smoke than fire. I still think a merge (several?) would be beneficial but it isn't a quagmire I have time to wade into right now. Perhaps just a serious copyedit, but I can see it descending into chaos. A really experienced editor might be able to pull it off. Lithopsian (talk) 19:07, 17 September 2017 (UTC) I went directly to you because you made the change despite lack of consensus. You removed the mention of ambiguity that was there, giving the impression that everyone uses the same terms, which is not true. And for what it's worth, the sources you cited don't mention the seasons in the southern hemisphere, so I don't think they can be used to support your position. --Lasunncty (talk) 08:08, 19 September 2017 (UTC) The citations I added support a definition of the terms Vernal equinox and Autumnal equinox, whether you agree with it or not; the existence of the southern hemisphere is irrelevant to that definition. I removed a bald statement of fact that the Vernal equinox is in September in the southern hemisphere, because it was uncited and directly contradicted that referenced definition. I have now added a description of this and further references, which will no doubt be controversial. Nevertheless, the citations stand and I will continue to remove contradictory or mis-placed statements that are not cited. As it stands, the article as a whole is very poorly referenced. Statements that I think could be verifiable, but are not currently cited, will get tagged if I'm feeling lazy and possibly cited if I'm feeling enthused; I've just done a major drive-by which isn't pretty but could at least be a starting point for adding much-needed verifiable sources. Lithopsian (talk) 13:41, 19 September 2017 (UTC) I appreciate that you're here talking rather than just edit-warring, but I doubt we'll achieve consensus between the two of us. This is an important article (in Wikipedia terms, just not near the top of my to-do list) and deserves to be a lot better than it is. Perhaps drumming up some interest from project pages might help, or just starting the right discussion on (one of the!) article talk pages. Lithopsian (talk) 14:36, 19 September 2017 (UTC) The two additional sources you added to the seasonal terms (now numbered 7 and 8) illustrate the discrepancy very clearly. Thank you. --Lasunncty (talk) 02:02, 20 September 2017 (UTC) This discussion touches upon the wider issue of the many journalists with media degrees who do not understand some fairly basic science or its terminology. Verifiability is laudable but in the truth v verification debate even JW has commented (Jimbo Wales (talk) 04:52 UTC 1 September 2011) that editors shouldn't publish untruths, even if there are many independent tabloid citations to support the assertion. PS Thanks for the thanks. :) Astronomy Explained (talk) 15:29, 22 September 2017 (UTC) ## New Page Reviewer Newsletter Hello Lithopsian, thank you for your efforts reviewing new pages! Backlog update: • The new page backlog is currently at 14304 pages. We have worked hard to decrease from over 22,000, but more hard work is needed! 
Please consider reviewing even just a few pages a day. • Currently there are 532 pages in the backlog that were created by non-autoconfirmed users before WP:ACTRIAL. The NPP project is undertaking a drive to clear these pages from the backlog before they hit the 90 day Google index point. Please consider reviewing a few today! Technology update: • The Wikimedia Foundation is currently working on creating a new filter for page curation that will allow new page patrollers to filter by extended confirmed status. For more information see: T175225 General project update: • On 14 September 2017 the English Wikipedia began the autoconfirmed article creation trial. For a six month period, creation of articles in the mainspace of the English Wikipedia will be restricted to users with autoconfirmed status. New users who attempt article creation will now be redirected to a newly designed landing page. • Before clicking on a reference or external link while reviewing a page, please be careful that the site looks trustworthy. If you have a question about the safety of clicking on a link, it is better not to click on it. • To keep up with the latest conversation on New Pages Patrol or to ask questions, you can go to Wikipedia talk:New pages patrol/Reviewers and add it to your watchlist. If you wish to opt-out of future mailings, go here. TonyBallioni (talk) 02:16, 19 September 2017 (UTC) ## Nu Persei Hi, thanks for the revert. I was convinced that Nu Persei was an RR Lyrae variable. I have checked the information, which is somewhat contradictory. Instead of speaking here, we should probably discuss this topic in Nu Persei's talk page. Eynar Oxartum (talk) 15:40, 19 September 2017 (UTC) ## NGC 479 image removal/previous version deletion questions Why was the Space Engine image on NGC 479 removed? The file page says something about deleting previous versions but keeping the file. This is very confusing. Can you please explain? And can I re-add it to the article? Thanks. – Batreeq (Talk) (Contribs) 21:14, 8 October 2017 (UTC) That image is known as a "non-free" image. It is copyrighted and is only included in Wikipedia on the basis of "fair use". This is a tricky legal term, but for Wikipedia it means that occasionally such images will be allowed where there are no equivalent free images and where it is appropriate for "identification of, and critical commentary on" the software that generated it. The image is included on that basis in the SpaceEngine article (which appears slightly doubtful to me, but it will be looked over carefully). The image resolution has been reduced to meet WP guidelines for non-free images, and the previous high-resolution image will be deleted. You don't need to do anything (unless you think the copyright assessment is wildly in error), but don't re-upload that image in a higher resolution or other Space Engine images without careful checking with the copyright gurus. As for the NGC 479 page, I removed the image as it definitely doesn't meet the non-free use guidelines in that article. Lithopsian (talk) 13:33, 9 October 2017 (UTC) ## Antares I have just made a significant reconstruction of the Properties section of the star Antares, and have especially fixed up the size issues, which were fragmented and confusing. Knowing your past edits on such stars, could you at least please double-check my work. I also removed two terribly poor reference cites given by JoeyPknowsalotaboutthat and after reading this[1], there might be some possible 'issues' coming. Thanks. 
Arianewiki1 (talk) 01:13, 17 October 2017 (UTC)

Just to let you know. I appreciated your recent updates and corrections to this Antares article here. It is certainly an improvement from the older version. Thanks for looking at this. Arianewiki1 (talk) 02:47, 19 October 2017 (UTC)

For your latest edit: $R = 3.4$ astronomical units. $R = 3.4$ AU = $D = 1.107$ billion km = 796 R (rounded) -> 800 R. jumk.de Stars and Planets also says 796 R for Antares, but I did not use it. I think that value on jumk.de was taken from Jim Kaler's stars. Thank you. --Joey P. - THE OFFICIAL (Visit/Talk/Contribs) 20:09, 20 October 2017 (UTC)

?? $R = 3.4$ AU = $D = 1.107$ billion km? 3.4 AU = 510 million km, which gives $D$ = 1.020 billion km. Please do your homework. Kaler actually says: "A low temperature coupled with high luminosity tells us that the star must be huge, luminosity and temperature giving a radius of about 3 Astronomical Units. It is so big that astronomers can easily detect and measure the size of its apparent disk, which gives an even bigger radius of 3.4 AU, 65 percent the size of the orbit of Jupiter. The difference is caused by uncertainties in distance, temperature, the state of pulsation, and the actual location of the mass-losing surface..." If you use 3 AU you get 588 R or about 600 R. Article says 680 R. Fair enough. It's cited, it's reasonable, and it now has the needed consensus. Clearly you are just cherry-picking larger values and ignoring for the nth time the problem of gross errors, as already explained to you and now exemplified in Kaler's own text. So far you have stretched Lithopsian's patience[2] to breaking point, now mine. Any further disruptive editing like this and then arbitration processes will immediately begin. So drop it... the unnecessary discussion is now over. Arianewiki1 (talk) 00:23, 21 October 2017 (UTC)

## Notice of Tendentious editing noticeboard discussion

Hello. This message is being sent to inform you that there is currently a discussion involving you at Wikipedia:Administrators' noticeboard/Incidents regarding a possible violation of Wikipedia's policy on tendentious editing. The thread is Wikipedia:Administrators' noticeboard/Incidents#WP:TE violations by JoeyPknowsalotaboutthat. Thank you. Arianewiki1 (talk) 04:18, 21 October 2017 (UTC)

## New Page Reviewer Newsletter

Hello Lithopsian, thank you for your efforts reviewing new pages!

Backlog update:
• The new page backlog is currently at 12,878 pages. We have worked hard to decrease from over 22,000, but more hard work is needed! Please consider reviewing even just a few pages a day.
• We have successfully cleared the backlog of pages created by non-confirmed accounts before ACTRIAL. Thank you to everyone who participated in that drive.

Technology update:
• Primefac has created a script that will assist in requesting revision deletion for copyright violations that are often found in new pages. For more information see User:Primefac/revdel.

General project update:

If you wish to opt out of future mailings, go here. TonyBallioni (talk) 17:47, 21 October 2017 (UTC)

## Deletion of "Garrett / teh ROBLOX Player"

Hi there. This is Garrett / teh ROBLOX Player.
(http://en.wikipedia.org/wiki/User:Garretttehrobloxplayer) I understand the reason why my article was deleted, but I have a question. Can you please restore the page temporarily, or for a few days? This is only so I can save a copy of the work. Please let me know by editing on the new page: http://en.wikipedia.org/wiki/Draft:Gtrp. (This new page can be deleted with the re-deletion of the original page.)

## NGC Object redirect pages

I just removed the description of the remaining redirections to NGC objects and added instead. Sorry for not doing that right away! The pages should now be OK for approval. WolreChris (talk) 17:45, 28 October 2017 (UTC)

## Sager Electronics

Sager Electronics Update. Hello, I just wanted to let you know that Sager Electronics was updated before the deletion. I removed all salesy terminology and rewrote it so as not to infringe on copyright. — Preceding unsigned comment added by Justinsmarshall (talkcontribs) 13:09, 8 November 2017 (UTC)

## N11 (emission nebula)/Bean Nebula

Thank you for your edits on my article on N11. I had thought that N11 was called the Bean Nebula; I can change my redirect to the relevant article if necessary. D Eaketts (talk) 21:56, 10 November 2017 (UTC)

Proper names are messy. Who's to say what's right and what's wrong? There are no catalogs, very often the origin of the name isn't even clear. There's at least one other object in the LMC that I've seen called the Bean Nebula. The ESA and NASA sources for the main image are also pretty poorly-worded, giving the impression that the bean-shaped blob is the whole of N11 when it is in fact just a small portion - that happens to be IMO the Bean Nebula, and happens to be for sure NGC 1763 and LHA120-N11B. Lithopsian (talk) 22:02, 10 November 2017 (UTC)

I will leave it as N11 for the time being unless it changes in the near future. D Eaketts (talk) 22:13, 10 November 2017 (UTC)
2017-11-17 19:51:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41012540459632874, "perplexity": 2363.9712472468996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934803906.12/warc/CC-MAIN-20171117185611-20171117205611-00375.warc.gz"}
https://crypto.stackexchange.com/questions/74246/zero-knowledge-proof-for-opening-of-pedersen-commit-and-discrete-logarithm
# Zero knowledge proof for opening of Pedersen commit and discrete logarithm

I am looking for a proof of knowledge as such: $$PK\{ (x,r) : C = g^xh^r \land V = g^x\}$$ where $$C, V, g$$ and $$h$$ are public information and $$x$$ and $$r$$ are known only to the prover. I.e., I have a Pedersen commitment and a public key, and I want to prove in zero knowledge that the committed value is the private key. Is there such a construct available?

This is a simple way of proving knowledge of a discrete log. In the non-interactive version, to prove knowledge of $$x$$ s.t. $$a = g^x$$ (assuming a public hash function $$H$$), the prover picks a random value $$k$$, computes $$t = g^k$$ and $$c = H(t)$$, and then publishes $$t$$ and $$s = k + cx$$. The verifier then checks whether $$g^s = t a^c$$.

To prove the statement you want, you first publish a proof that you know $$x$$ s.t. $$V = g^x$$. The second proof is that you know $$r$$ s.t. $$C V^{-1} = h^r$$.
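A toy Python sketch of this construction, with deliberately insecure parameters and with the Schnorr nonce named k so that it does not clash with the commitment randomness r. The group, the bases and the hash framing are placeholders of mine, not a recommendation.

```python
# Toy sketch of the two Schnorr-style proofs described above.
# The group parameters are NOT secure; they only illustrate the algebra.
import hashlib
import secrets

p = 2**127 - 1   # toy prime modulus
q = p - 1        # exponents are reduced modulo the group order
g, h = 3, 5      # toy bases; in a real Pedersen setup nobody may know log_g(h)

def H(*vals):
    data = b'|'.join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), 'big') % q

def prove_dlog(x, base, public):
    """Non-interactive proof of knowledge of x such that public = base**x (mod p)."""
    k = secrets.randbelow(q)
    t = pow(base, k, p)
    c = H(base, public, t)
    return t, (k + c * x) % q

def verify_dlog(base, public, t, s):
    c = H(base, public, t)
    return pow(base, s, p) == (t * pow(public, c, p)) % p

# Prover side: Pedersen commitment C = g^x h^r and public key V = g^x
x, r = secrets.randbelow(q), secrets.randbelow(q)
C = (pow(g, x, p) * pow(h, r, p)) % p
V = pow(g, x, p)

proof1 = prove_dlog(x, g, V)                        # knowledge of x with V = g^x
proof2 = prove_dlog(r, h, (C * pow(V, -1, p)) % p)  # knowledge of r with C/V = h^r

print(verify_dlog(g, V, *proof1), verify_dlog(h, (C * pow(V, -1, p)) % p, *proof2))
```

Together, the two proofs show that $$C$$ opens to a pair $$(x, r)$$ in which the committed value $$x$$ is the same discrete log as in $$V = g^x$$, which is the statement asked for above.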
2021-03-07 02:50:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 18, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6223036646842957, "perplexity": 425.656076793521}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376006.87/warc/CC-MAIN-20210307013626-20210307043626-00504.warc.gz"}
https://math.stackexchange.com/questions/2505651/a-convex-function-with-a-non-empty-domain-interior-in-a-non-barreled-locally-con
# A convex function with a non-empty domain interior in a non-barreled locally convex space

1. Given a (Hausdorff separated) locally convex space $X$, what can we say about a proper convex function $f:X\to\mathbb{R}\cup\{+\infty\}$ whose domain $\emptyset\neq D(f):=\{x\in X\mid f(x)<+\infty\}$ has a non-empty (topological) interior? Recall that when $X$ is barreled, a non-empty domain interior means that the function is continuous on the interior of its domain. So what can happen when the space is not barreled, other than continuity?

My take on this so far is to consider the simplest type of a convex function: a sublinear function (positively homogeneous and subadditive). I took $B\subset X^*$ (the topological dual) and $f=\sigma_B$ the support function of $B$; $\sigma_B(x):=\sup_{x^*\in B}x^*(x)$, $x\in D(\sigma_B)$ (the barrier cone of $B$). My reformulation of the problem in this case is:

2. What can one say about a set whose barrier cone has a non-empty interior?

To simplify even more, in case $0\in {\rm core}\ {D(\sigma_B)}$ (core here denotes the algebraic interior) we get that $D(\sigma_B)=X$ because $D(\sigma_B)$ is an absorbing cone. We find that $B$ is (weak-star) bounded.
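One way to spell out the last claim (my own remark, not part of the question): if $D(\sigma_B)=X$, then for every $x\in X$ both $\sigma_B(x)<+\infty$ and $\sigma_B(-x)<+\infty$, so

$\sup_{x^*\in B}|x^*(x)|\le\max\{\sigma_B(x),\sigma_B(-x)\}<+\infty.$

Thus $B$ is pointwise bounded on $X$, which is exactly weak-star boundedness.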
2019-09-23 13:41:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9307501316070557, "perplexity": 413.24902766039264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576965.71/warc/CC-MAIN-20190923125729-20190923151729-00091.warc.gz"}
https://www.thejournal.club/c/paper/55020/
#### Expressiveness via Intensionality and Concurrency

##### Thomas Given-Wilson

Computation can be considered by taking into account two dimensions: extensional versus intensional, and sequential versus concurrent. Traditionally, sequential extensional computation can be captured by the lambda-calculus. However, recent work shows that there are more expressive intensional calculi such as SF-calculus. Traditionally, process calculi capture computation by encoding the lambda-calculus, such as in the pi-calculus. Following this increased expressiveness via intensionality, other recent work has shown that concurrent pattern calculus is more expressive than pi-calculus. This paper formalises the relative expressiveness of all four of these calculi by placing them on a square whose edges are irreversible encodings. This square is representative of a more general result: that expressiveness increases with both intensionality and concurrency.
2022-08-09 04:07:25
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8071417212486267, "perplexity": 2605.078188898383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570901.18/warc/CC-MAIN-20220809033952-20220809063952-00584.warc.gz"}
http://arminstraub.com/talk/densities-mpim
arminstraub.com

# Talk: Arithmetic aspects of short random walks (MPIM)

Date: 2013/02/13 Occasion: Number Theory Lunch Seminar Place: Max-Planck-Institut für Mathematik, Bonn

## Abstract

We revisit a classical problem: how far does a random walk travel in a given number of steps (of length 1, each taken along a uniformly random direction)? Although such random walks are asymptotically well understood, surprisingly little is known about the exact distribution of the distance after just a few steps. For instance, the average distance after two steps is (trivially) given by 4/pi; but what is the average distance after three steps? In this talk, we therefore focus on the arithmetic properties of short random walks and consider both the moments of the distribution of these distances as well as the corresponding density functions. It turns out that the even moments have a rich combinatorial structure which we exploit to obtain analytic information. In particular, we find that in the case of three and four steps, the density functions can be put in hypergeometric form and may be parametrized by modular functions. Much less is known for the density in the case of five random steps, but using the modularity of the four-step case we are able to deduce its exact behaviour near zero. Time permitting, we will also discuss connections with Mahler measure.
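As a quick numerical illustration of the question posed above (not part of the talk), a Monte Carlo estimate of the mean distance can be computed as follows; the accuracy is limited by the number of trials.

```python
import numpy as np

def mean_distance(n_steps, trials=10**6, seed=0):
    """Monte Carlo estimate of the expected distance after n_steps unit steps
    taken in uniformly random directions in the plane."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(trials, n_steps))
    x = np.cos(theta).sum(axis=1)
    y = np.sin(theta).sum(axis=1)
    return np.hypot(x, y).mean()

print(mean_distance(2))  # should be close to 4/pi = 1.2732...
print(mean_distance(3))  # numerical estimate of the three-step average distance
```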
2017-11-22 05:26:45
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8143084645271301, "perplexity": 329.8952651331959}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806465.90/warc/CC-MAIN-20171122050455-20171122070455-00706.warc.gz"}
http://ibl.kb.nl/articles/0000000020950/342/3/lang/uk
Digital Library

10 results found (no. / title / author / year / volume / issue / pages / type):

1. An Invariant of Topologically Ordered States Under Local Unitary Transformations, Haah, Jeongwan, 2016, 342, 3, p. 771-801, article
2. A Refined Threshold Theorem for (1 + 2)-Dimensional Wave Maps into Surfaces, Lawrie, Andrew, 2015, 342, 3, p. 989-999, article
3. Bott Periodicity for $\mathbb{Z}_2$ Symmetric Ground States of Gapped Free-Fermion Systems, Kennedy, R., 2015, 342, 3, p. 909-963, article
4. Codimension One Threshold Manifold for the Critical gKdV Equation, Martel, Yvan, 2015, 342, 3, p. 1075-1106, article
5. Liouville Quantum Gravity on the Riemann Sphere, David, François, 2016, 342, 3, p. 869-907, article
6. Non-Relativistic Twistor Theory and Newton–Cartan Geometry, Dunajski, Maciej, 2016, 342, 3, p. 1043-1074, article
7. On a Kinetic Fitzhugh–Nagumo Model of Neuronal Network, Mischler, S., 2016, 342, 3, p. 1001-1042, article
8. Orbifold Construction of Holomorphic Vertex Operator Algebras Associated to Inner Automorphisms, Lam, Ching Hung, 2015, 342, 3, p. 803-841, article
9. Painlevé Representation of Tracy–Widom$_\beta$ Distribution for $\beta$ = 6, Rumanov, Igor, 2015, 342, 3, p. 843-868, article
10. Universal Probability Distribution for the Wave Function of a Quantum System Entangled with its Environment, Goldstein, Sheldon, 2015, 342, 3, p. 965-988, article

Koninklijke Bibliotheek - National Library of the Netherlands
2020-08-11 04:30:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3229270279407501, "perplexity": 3727.893409773735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738727.76/warc/CC-MAIN-20200811025355-20200811055355-00448.warc.gz"}
http://mathhelpforum.com/number-theory/31788-sets-without-squares.html
Let $S_i$ be the set of all integers $n$ such that $100i\leq n < 100(i + 1)$. For example, $S_4$ is the set $\{400,401,402,\ldots,499\}$. How many of the sets $S_0, S_1, S_2, \ldots, S_{999}$ do not contain a perfect square?
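The thread as captured here contains only the problem statement, so here is a direct brute-force count (my own sketch). The underlying observation is that consecutive squares below 2500 differ by less than 100, so the first 25 blocks all contain a square, while the squares of 50 through 316 each land in a distinct later block; that gives 1000 - (25 + 267) = 708 blocks with no square, which the script confirms.

```python
# Brute-force count of the blocks S_i = [100*i, 100*i + 99] that contain no
# perfect square (added here as a sketch; the original thread has no answer).
import math

def blocks_without_square(num_blocks=1000, block_size=100):
    count = 0
    for i in range(num_blocks):
        lo, hi = block_size * i, block_size * (i + 1) - 1
        # S_i contains a square iff the smallest square >= lo is still <= hi.
        root = math.isqrt(lo)
        if root * root < lo:
            root += 1
        if root * root > hi:
            count += 1
    return count

print(blocks_without_square())   # prints 708
```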
2016-09-29 09:07:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9123833775520325, "perplexity": 17.234866389612314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661780.47/warc/CC-MAIN-20160924173741-00185-ip-10-143-35-109.ec2.internal.warc.gz"}
http://openstudy.com/updates/4f94a3bee4b000ae9ecad80b
## Mcurtis71

The distance between two charged objects is doubled. What happens to the electrostatic force between the two?

1. RaphaelFilgueiras: (drawing not captured)

2. .Sam.: Coulomb's law gives $F=k\frac{q_{1}q_{2}}{r^2}$. So if you double the distance, $F'=k\frac{q_{1}q_{2}}{(2r)^2}=\frac{1}{4}\,k\frac{q_{1}q_{2}}{r^2}$. The force drops to one quarter of its original value.
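A one-line numeric check of the inverse-square scaling (my own addition; the charge and distance values are arbitrary):

```python
# Ratio of Coulomb forces when the separation is doubled (should be 0.25).
k, q1, q2, r = 8.99e9, 1e-6, 2e-6, 0.5   # illustrative values
F  = k * q1 * q2 / r**2
F2 = k * q1 * q2 / (2 * r)**2
print(F2 / F)   # 0.25
```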
2014-07-23 18:16:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5070363879203796, "perplexity": 1945.96684697258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997882928.29/warc/CC-MAIN-20140722025802-00158-ip-10-33-131-23.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/579642/two-subgroups-h-1-h-2-of-a-group-g-are-conjugate-iff-g-h-1-and-g-h-2-ar
# Two subgroups $H_1, H_2$ of a group $G$ are conjugate iff $G/H_1$ and $G/H_2$ are isomorphic Let $H_1$ and $H_2$ be subgroups of some group $G$. Prove that the left $G$-sets $G/H_1$ and $G/H_2$ are isomorphic (as left $G$-sets) iff the subgroups $H_1$ and $H_2$ are conjugate. If $H_1$ and $H_2$ are conjugate, then they are isomorphic, thus $G/H_1 \cong G/H_2$. I am having problems with the other direction! • Could you please add the definition of "Isomorphic $\;G$ - sets" ? – DonAntonio Nov 24 '13 at 20:10 • Two $G$-sets $X$ and $Y$ are called isomorphic if there is a bijection $\varphi : X\to Y$ with $\varphi(gx) = g\varphi(x)$ for all $g\in G$ and all $x\in X$. – azimut Nov 24 '13 at 21:22 • No, however I am having trouble understanding why that would be useful? – Matt Costa Nov 24 '13 at 21:40 • be careful because two subgroups can be isomorphic without the corresponding quotients being isomorphic, so in your proof you should use that it isn't just any old isomorphism. – user2055 Nov 24 '13 at 22:28 • to expand on @Jason's comment, consider for example $G=\mathbb{Z}/2\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$ with subgroups $H_1=\mathbb{Z}/2\mathbb{Z}\oplus 1$ and $H_2=1\oplus \langle 2+4\mathbb{Z} \rangle$. Then $G/H_1\cong \mathbb{Z}/4\mathbb{Z}$, whereas $G/H_2\cong \mathbb{Z}/2\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. – Alexander Gruber Nov 25 '13 at 4:27 If $\phi : G/H_1 \to G/H_2$ is a homomorphism of $G$-sets, and $\phi([1])=[g]$, then it follows more generally that $\phi([x])=[xg]$. But $\phi$ should be well-defined, i.e. $y^{-1} x \in H_1$ implies $(y g)^{-1} x g \in H_2$. This reduces to $g^{-1} H_1 g \subseteq H_2$. Conversely, this relation implies that $\phi([x]):=[xg]$ is a well-defined homomorphism of $G$-sets. Similarly, $\phi$ is injective iff $g H_2 g^{-1} \subseteq H_1$. And $\phi$ is automatically surjective. It follows that $G/H_1 \cong G/H_2$ as $G$-sets iff $H_1$ and $H_2$ are conjugated. As already pointed out in the comments, it is really important to work in the category of $G$-sets here. But there aren't any alternatives anyway. Of course the category of sets is too weak, and the category of groups doesn't make sense since $H_1,H_2$ aren't assumed to be normal. • Generally, for an equivalence relation $\sim$ on a given set $X$, the notation $\left[x\right]_\sim$ (or simply $\left[x\right]$ when the relation is obvious) is used to denote the equivalence class of $x$. That is $$\left[x\right]=\left\{ y\in X\vert x\sim y\right\}$$ In our case, for $H\leq G$ we denote $\left[g\right]=gH$. Notice that in the above answer the coset is of $H_1$ in some usages and of $H_2$ in others. So for example $$\phi\left(\left[1\right]\right)=\left[g\right]$$ should be interpreted as $\phi\left(1H_{1}\right)=gH_{2}$ – 8l2s Jan 8 '17 at 13:23
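The counterexample in Alexander Gruber's comment is easy to verify by machine. The sketch below (my own, not from the thread) builds $G=\mathbb{Z}/2\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$ with the two order-2 subgroups from the comment, computes the element orders in each quotient, and confirms that one quotient is cyclic of order 4 while the other is the Klein four-group, so isomorphic subgroups alone are not enough for the quotients to agree.

```python
# Check of the comment's example: G = Z/2 x Z/4, H1 = <(1,0)>, H2 = <(0,2)>.
# Both subgroups have order 2, yet the quotient groups are not isomorphic.
# (Sketch, not from the thread.)
from itertools import product

G = list(product(range(2), range(4)))

def add(a, b):
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 4)

def coset(g, H):
    return frozenset(add(g, h) for h in H)

def coset_orders(H):
    identity = coset((0, 0), H)
    quotient = {coset(g, H) for g in G}
    orders = []
    for c in quotient:
        g = next(iter(c))
        n, x = 1, g
        while coset(x, H) != identity:
            x = add(x, g)
            n += 1
        orders.append(n)
    return sorted(orders)

H1 = [(0, 0), (1, 0)]
H2 = [(0, 0), (0, 2)]
print(coset_orders(H1))   # [1, 2, 4, 4] -> cyclic of order 4
print(coset_orders(H2))   # [1, 2, 2, 2] -> Klein four-group
```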
2019-10-14 20:17:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9739435911178589, "perplexity": 135.4448322547876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986655310.17/warc/CC-MAIN-20191014200522-20191014224022-00184.warc.gz"}
http://www.gbgb.org.uk/news/2/2017/1/496/Trainer-of-the-Year-Championship-Standings-up-to-and-including-Wednesday-11-January-2017
# Latest News ## Trainer of the Year Championship Standings, up to and including Wednesday 11 January 2017 Position Trainer Points W R % TPM 1 M A Wallis 40 8 19 42% £7,750 2 P J Simmonds 12 2 16 13% £1,600 3 P W Young 11 2 23 9% £2,200 =4 R J Holloway 10 1 3 33% £550 =4 K R Hutton 10 0 8 0% £770 6 E A Gaskin 7 3 13 23% £1,345 =7 D R Pruhs 6 2 5 40% £940 =7 P Janssens 6 2 5 40% £835 =9 D D Knight 5 1 1 100% £500 =9 M L Locke 5 1 4 25% £650 =9 J E Harvey 5 1 7 14% £950 =9 K P Boon 5 0 7 0% £600 =13 C Weatherall 2 2 2 100% £400 =13 A R Keppie 2 2 9 22% £560 =13 D J Elcock 2 1 1 100% £250 =13 W R Wrighting 2 1 3 33% £350 =13 F J Gray 2 1 3 33% £350 =13 S A Cahill 2 1 4 25% £500 =13 J J Heath 2 1 6 17% £500 =13 D Mullins 2 1 14 7% £1,150 =21 S Tighe 1 1 1 100% £200 =21 P A Harmes 1 1 1 100% £200 =21 M N Fenwick 1 1 1 100% £200 =21 A P Tuffin 1 1 1 100% £175 =21 J H Smith 1 1 1 100% £175 =21 R C Boosey 1 1 1 100% £175 =21 J S Battley 1 1 1 100% £175 =21 C Hopkins 1 1 1 100% £150 =21 A J Taylor 1 1 2 50% £250 =21 D Deakin 1 1 2 50% £240 =21 J M Walton 1 1 2 50% £240 =21 G Kovac 1 1 2 50% £205 =21 I W Mills 1 1 2 50% £205 =21 M T Newman 1 1 2 50% £205 =21 C A Perry 1 1 2 50% £190 =21 A E Gardiner 1 1 3 33% £300 =21 K E Humphreys 1 1 3 33% £245 =21 E J Cantillon 1 1 3 33% £210 =21 P J Rosney 1 1 4 25% £320 =21 L G Tuffin 1 1 4 25% £290 =21 C A Grasso 1 1 4 25% £265 =21 D B Whitton 1 1 4 25% £265 =21 J Bloomfield 1 1 5 20% £295 =44 J A Millard 0 0 1 0% £100 =44 J W Reynolds 0 0 1 0% £100 =44 D J Allen 0 0 1 0% £50 =44 J F Spracklen 0 0 1 0% £50 =44 M C B Collins 0 0 1 0% £50 =44 P J Dale 0 0 1 0% £50 =44 S Harms 0 0 1 0% £50 =44 B Denby 0 0 1 0% £40 =44 C T Lynch 0 0 1 0% £40 =44 D Calvert 0 0 1 0% £40 =44 E Upton 0 0 1 0% £40 =44 G A Griffiths 0 0 1 0% £40 =44 G Rankin 0 0 1 0% £40 =44 J R Hall 0 0 1 0% £40 =44 J W Bamber 0 0 1 0% £40 =44 N J Saunders 0 0 1 0% £40 =44 P Barlow 0 0 1 0% £40 =44 B Turner 0 0 1 0% £30 =44 C Price 0 0 1 0% £30 =44 D M Merchant 0 0 1 0% £30 =44 E H Howard 0 0 1 0% £30 =44 J A Bristow 0 0 1 0% £30 =44 J D Davy 0 0 1 0% £30 =44 J M Ray 0 0 1 0% £30 =44 K J Taylor 0 0 1 0% £30 =44 L Cook 0 0 1 0% £30 =44 N Savva 0 0 1 0% £30 =44 N Shine 0 0 1 0% £30 =44 P J Bartley 0 0 1 0% £30 =44 R S Griffin 0 0 1 0% £30 =44 R W Butler 0 0 1 0% £30 =44 S L Newberry 0 0 1 0% £30 =44 S Naylor 0 0 1 0% £30 =44 T A Johnson 0 0 1 0% £30 =44 T Batchelor 0 0 1 0% £30 =44 J J Luckhurst 0 0 2 0% £200 =44 A Kelly-Pilgrim 0 0 2 0% £130 =44 R H Peckover 0 0 2 0% £100 =44 J E Meek 0 0 2 0% £70 =44 P J Doocey 0 0 2 0% £70 =44 P Timmins 0 0 2 0% £70 =44 R B Harwood 0 0 2 0% £70 =44 A M Kibble 0 0 2 0% £60 =44 D N Lewis 0 0 2 0% £60 =44 H J Dimmock 0 0 2 0% £60 =44 J A Danahar 0 0 2 0% £60 =44 J G Hurst 0 0 2 0% £60 =44 N J Mcdonald 0 0 2 0% £60 =44 P Crowson 0 0 2 0% £60 =44 P J Dolby 0 0 2 0% £60 =44 R W Liddington 0 0 2 0% £60 =44 T W Hunter 0 0 2 0% £60 =44 G B Ballentine 0 0 3 0% £180 =44 N Mcellistrim 0 0 3 0% £150 =44 E T Parker 0 0 3 0% £120 =44 J L Mccombe 0 0 3 0% £120 =44 R Grey 0 0 3 0% £120 =44 K J Cobbold 0 0 3 0% £110 =44 M H Fawsitt 0 0 3 0% £90 =44 R M Emery 0 0 3 0% £90 =44 B Doyle 0 0 4 0% £200 =44 J E Hayton 0 0 4 0% £150 =44 I Bradford 0 0 4 0% £130 =44 P Mingay 0 0 4 0% £120 =44 C R Lister Obe 0 0 5 0% £190 =44 J G Mullins 0 0 8 0% £550
2018-01-23 23:38:07
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8773346543312073, "perplexity": 9304.543719175552}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084892802.73/warc/CC-MAIN-20180123231023-20180124011023-00788.warc.gz"}
http://math.stackexchange.com/questions/24218/does-the-definition-of-model-depend-on-a-theory-or-just-a-signature
# does the definition of model depend on a theory or just a signature? T:Σ-theory For M:Σ-model which satisfies T, is it proper to name it “Σ,T-model” or “Σ-model satisfying T”? In comparison, in the context of Lawvere theories, any Lawvere theory L incorporates a signature as well as axioms. If we construct L from Σ and T, any L-model obviously satisfies T. Therefore “Σ,T-model” seems as an appropriate name. - The usual terminology in model theory is that for a given language or signature $\Sigma$, one may speak of a structure in the signature $\Sigma$, or a $\Sigma$-structure, and this places no requirements on the theory of the model. When one has a theory $T$ in the language of $\Sigma$, one may have a model of $T$, which is a structure in that signature satisfying that theory. One ambiguity, as you notice, is that if a theory is regarded as a set of sentences, then one cannot fully tell the language from the theory, since perhaps some parts of the language were not used in the theory. This ambiguity is not actually ambiguous in practice if the signature $\Sigma$ is clear from context, and when it is not, then you are right that one must say what the language is, in the cases that this is a difference that matters. So in practice, this amounts essentially to the same as the Lawvere practice. An interesting example where the distinction is important is that it is possible for a theory $T$ to be decidable in a small language, but not in a larger language. For example, Presburger arithmetic is decidable in the language with just $+$, but not in the language having both plus and times, even when no additional axioms are added concerning times. Indeed, almost no theory will be decidable in a language that includes extra unmentioned binary or higher arity relation or function symbols, since the empty theory in the language with such relation or functions symbols is undecidable. In practice, researchers often use the term model when no theory has been specified, and in such cases, they usually mean structure, unless a particular theory and signature is implicit. Finally, regarding your proposal for the term "$\Sigma,T$-model", let me say that my own taste tends towards natural language formulations, where we could simply speak of a model of $T$ in the language $\Sigma$. - Thank you, very interesting. –  beroal Feb 28 '11 at 15:42 I found one source of confusion. In “Categorical Logic and Type Theory” by Bart Jacobs in Section 1.6 “Fibrations of signatures” a “model” means a structure as defined in your answer. I believe such confusion appears in other books. –  beroal Mar 14 '11 at 20:06
2015-07-31 18:20:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8478096723556519, "perplexity": 389.82223356197403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988310.3/warc/CC-MAIN-20150728002308-00301-ip-10-236-191-2.ec2.internal.warc.gz"}
https://library.kiwix.org/bitcoin.stackexchange.com_en_all_2021-04/A/question/57208.html
## How do I convert a blockchain to a new version?

I have an altcoin based on old Litecoin sources, and I am trying to convert it to the latest available sources. How can I do this, and what is needed? I tried simply downloading the chain with the old wallet and then running the new one, but mining new blocks or downloading them from another wallet is not possible, and I could not find any information on how to do this. I get this error when trying to download blocks from peers:

ERROR: AcceptBlockHeader: Consensus::ContextualCheckBlockHeader: 90e718e6878f4b7ae4de4ae83db75881d00ca017f5a117c2054798bcb76c4178, bad-version(0x00000002), rejected nVersion=0x00000002 block (code 17)
2017-07-28 04:12:10 ProcessMessages(headers, 162003 bytes) FAILED peer=0
2017-07-28 04:12:10 receive version message: /Satoshi:1.0.0/: version 70002, blocks=3293, us=x.x.x.x:12815, peer=1

In the validator I have these settings:

    // Check proof of work
    if (block.nBits != GetNextWorkRequired(pindexPrev, &block, consensusParams))
        return state.DoS(100, false, REJECT_INVALID, "bad-diffbits", false, "incorrect proof of work");
    // Check timestamp against prev
    if (block.GetBlockTime() <= pindexPrev->GetMedianTimePast())
        return state.Invalid(false, REJECT_INVALID, "time-too-old", "block's timestamp is too early");
    // Check timestamp
    if (block.GetBlockTime() > nAdjustedTime + 2 * 60 * 60)
        return state.Invalid(false, REJECT_INVALID, "time-too-new", "block timestamp too far in the future");
    // Reject outdated version blocks when 95% (75% on testnet) of the network has upgraded:
    // check for version 2, 3 and 4 upgrades
    if((block.nVersion < 2 && nHeight >= consensusParams.BIP34Height) ||
       (block.nVersion < 3 && nHeight >= consensusParams.BIP66Height) ||
       (block.nVersion < 4 && nHeight >= consensusParams.BIP65Height))
        return state.Invalid(false, REJECT_OBSOLETE, strprintf("bad-version(0x%08x)", block.nVersion), strprintf("rejected nVersion=0x%08x block", block.nVersion));
    if (block.nVersion < VERSIONBITS_TOP_BITS && IsWitnessEnabled(pindexPrev, consensusParams))
        return state.Invalid(false, REJECT_OBSOLETE, strprintf("bad-version(0x%08x)", block.nVersion), strprintf("rejected nVersion=0x%08x block", block.nVersion));
    return true;
    }

In the genesis block settings I have:

    genesis = CreateGenesisBlock(1498204210, 215446, 0x1e0ffff0, 1, 500 * COIN);

But in the old sources I have this setting:

    // Check that the block chain matches the known block chain up to a checkpoint
    if (!Checkpoints::CheckBlock(nHeight, hash))
        return state.DoS(100, error("AcceptBlock() : rejected by checkpoint lock-in at %d", nHeight));
    // Don't accept any forks from the main chain prior to last checkpoint
    CBlockIndex* pcheckpoint = Checkpoints::GetLastCheckpoint(mapBlockIndex);
    if (pcheckpoint && nHeight < pcheckpoint->nHeight)
        return state.DoS(100, error("AcceptBlock() : forked chain older than last checkpoint (height %d)", nHeight));
    // Reject block.nVersion=1 blocks (mainnet >= 710000, testnet >= 400000)
    if (nVersion < 2) {
        if ((!fTestNet && nHeight >= 710000) || (fTestNet && nHeight >= 400000)) {
            return state.Invalid(error("AcceptBlock() : rejected nVersion=1 block"));
        }
    }
    // Enforce block.nVersion=2 rule that the coinbase starts with serialized block height
    if (nVersion >= 2) {
        if ((!fTestNet && nHeight >= 710000) || (fTestNet && nHeight >= 400000)) {
            CScript expect = CScript() << nHeight;
            if (vtx[0].vin[0].scriptSig.size() < expect.size() || !std::equal(expect.begin(), expect.end(), vtx[0].vin[0].scriptSig.begin()))
                return state.DoS(100, error("AcceptBlock() : block height mismatch in coinbase"));
        }
    }
    }

What do I need to change so that it starts accepting blocks?

The issue here is with:

    (block.nVersion < 3 && nHeight >= consensusParams.BIP66Height)

Your block has version 2, but I assume the block you are trying to download is higher than BIP66Height, so it rejects it. It's very difficult and requires a lot of expert know-how to modify the source code of a coin, and it sounds like you probably don't have the experience/skillset needed at this point, so I'd like to gently suggest you spend some more time learning about how Bitcoin and the altcoin you are working on are coded before trying to make this modification, because you can't just copy and paste code from Bitcoin into an altcoin and expect it to work. Different coins just aren't compatible with each other in that way.

I know how to modify it, but the problem is only with accepting blocks... I cannot find info on how this works. – tseries – 2017-07-28T08:28:33.440

And I am not just copy-pasting code. I tried to update, but I am stuck on these BIP settings. – tseries – 2017-07-28T08:59:17.707

I'm afraid to say that if this is giving you difficulty, you probably don't have the skillset required for modifying the coin, as I said. I've voted to close this question as too broad because of that, it's simply not narrow enough to answer. – MeshCollider – 2017-07-28T11:26:03.777

You think I did not try something as simple as changing the version values in this piece of the sources? Why do you answer if you do not know? Why do people like you try to answer questions when they do not know the answer, and when their answer is wrong they quickly downvote the question? – tseries – 2017-07-28T12:13:48.503

I know C++, and I know how to create and change any coin or wallet. But I only have a problem with this BIP. I ask for a solution because it is faster than researching all the code myself, if anyone who knows this can share it. That is what I think this service was created for, not for collecting reputation, which is worth nothing... Also, compilation on my hardware is a very long process, which makes it a problem to research it all myself at the moment. – tseries – 2017-07-28T12:29:32.827

@tseries Please do not insult people who are trying to help you. Assume good faith and be nice. https://bitcoin.stackexchange.com/help/be-nice – Nick ODell – 2017-07-29T17:45:56.017

Sorry for that, I have not slept for 2 days while trying to fix this... – tseries – 2017-07-29T20:08:17.333
2021-08-04 14:34:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24093030393123627, "perplexity": 11163.035250144214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154878.27/warc/CC-MAIN-20210804142918-20210804172918-00098.warc.gz"}
https://www.transtutors.com/questions/recording-adjusting-journal-entries-using-the-information-in-m4-5-for-each-transacti-2632117.htm
# Recording Adjusting Journal Entries Using the information in M4-5, for each transaction, (1)...

Recording Adjusting Journal Entries: Using the information in M4-5, for each transaction, (1) identify the type of adjustment and (2) prepare the adjusting journal entry required on December 31, 2010.

M4-5 Determining Effects of Adjustments: For each of the following transactions for Sky Blue Company, owned by sole proprietor Anna, give the amounts and direction of effect (+ for increase or − for decrease) of the adjustments required at the end of the month on December 31, 2010. Use the following form. If an element is not affected, write NE for no effect.

a. Collected $1,200 rent for the period December 1, 2010, to February 28, 2011, that was credited to Unearned Rent Revenue on December 1, 2010.

b. Paid $2,400 for a two-year insurance premium on December 1, 2010; debited Prepaid Insurance for that amount.

c. Used a machine purchased on December 1, 2010, for $48,000. The company estimates annual depreciation of $4,800.
2018-08-17 05:38:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18376705050468445, "perplexity": 3576.483632938018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211719.12/warc/CC-MAIN-20180817045508-20180817065508-00182.warc.gz"}
http://physics.stackexchange.com/tags/mass-energy/hot
# Tag Info 16 You've probably heard of Einstein's famous equation: $$e = mc^2$$ This states that mass and energy are equivalent, and indeed the LHC turns energy into matter every day. So to find the mass equivalent to an electron volt just convert eV to Joules and divide by $c^2$. 1 electron volt = $1.60217646 \times 10^{-19}$ joules, so 125 GeV is: $$125 \times ... 16 Yes, the total mass of a battery increases when the battery is charged and decreases when it is discharged. The difference boils to Einstein's E=mc^2 that follows from his special theory of relativity. Energy is equivalent to mass and c^2, the squared speed of light, is the conversion factor. I would omit the scenario I. If the lithium is leaking from ... 16 Because time is accumulating, to calculate the time lapse, you integrate. The elementary time interval transforms like mass. The difference is that the total time lapse is done by "summing" over all elementary intervals. For the mass, you don't do this. For mass:$$m=\frac {m_0} {\sqrt{1-\frac{v^2}{c^2}}}$$For time:$$dt=\frac {dt_0} ... 14 To answer the question simply, $E=mc^2$. Energy is a manifestation of mass, and mass is a manifestation of energy. In a fusion or fission process, the total "energy" of the system remains constant, it just changes shape. By "energy" I mean the totality of the already present energy, and the bound energy of the mass that takes part in the reaction. 12 The answer is the there is some reduction in mass whenever energy is released, whether in nuclear fission or burning of coal or whatever. However, the amount of mass lost is very small, even compared to the masses of the constituent particles. A good overview is given in the Wikipedia article on mass excess. Basically, the mass of a nucleus will in general ... 10 You can find the shortest and easiest derivation of this result in the paper where it was released by Einstein himself (what better reference can you find?) in 1905. It is not the main paper of Special Relativity, but a short document he added shortly afterwards. A. Einstein,Ist die Trägheit eines Körpers von seinem Energieinhalt Abhängig?, Annalen der ... 10 $E = mc^2$ is only the equation for the "rest energy" of a particle/object. The full equation for the kinetic energy of a moving particle is actually: $E = \gamma mc^2 - mc^2$ where $\gamma$ is defined as $\gamma = \frac{1}{\sqrt{1 - (v/c)^2}}$ where $v$ is the relative velocity of the particle. An "intuitive" answer to the question can be seen by ... 10 The conversion between mass and energy isn't even really a conversion. It's more that mass (or "mass energy") is a name for some amount of an object's energy. But the same energy that you call the mass can actually be a different type of energy, if you look closer. For example, we say that a proton has a specific amount of mass, about $2\times 10^{-27}\text{ ... 9 Take a nucleus of U-235 and determine its mass. Induce it to fission by firing a neutron at it. When it does so, collect all the pieces (except the extra neutron) and determine their total mass. You will find that all the pieces weigh just a hair less than original nucleus. The difference is the "binding energy", also previously known as the "packing ... 8 This is actually a more complex question than you might think, because the distinction between mass and energy kind of disappears once you start talking about small particles. So what is mass exactly? 
There are two common definitions: The quantity that determines an object's resistance to a change in motion, the$m$in$\sum F = ma$The quantity that ... 8 As noted by someone else, energy can be "converted" into mass e.g. via pair production. However, there is another example of this that you may be interested in: The mass of the matter you come into contact with on an everyday basis is almost entirely from protons and neutrons, which are roughly 2000x more massive than electrons. The proton, for example, is ... 7 Starting with your given equation, we add$p^2 c^2$to both sides to get $$E^2=m^2 c^4 + p^2 c^2$$ now using the definition of relativistic momentum$p=\gamma m v$we substitute that in above to get $$E^2 = m^2 c^4 +(\gamma m v)^2 c^2=m^2 c^4 +\gamma^2 m^2 v^2 c^2$$ Now, factoring out a common$m^2 c^4$from both terms on the RHS in anticipation of the ... 7 As you may know, photons do not have mass. Relating relativistic momentum and relativistic energy, we get:$E^2 = p^2c^2+(mc^2)^2$. where$E$is energy,$p$is momentum,$m$is mass and$c$is the speed of light. As mass is zero,$E=pc$. Now, we know that$E=hf$. Then we get the momentum for photon. Note that there is a term called effective inertial ... 7 If I'm reading rightly, I think your main question is: Why does only a small percentage of rest mass turn into energy [even for fusion]? It's because the universe is very strict about a certain small set of conservation rules, and certain combinations of these rules make ordinary matter extremely stable. Exactly why these rules are so strictly observed ... 7 Yes, everything generates a gravitational field, whether it is massive or massless like a photon. The source of the gravitational field is an object called the stress-energy tensor. This is normally written as a 4 x 4 symmetric matrix, and the top left entry is the energy density. Note that mass does not appear at all. We convert mass to energy by ... 7 Energy and matter are not the same. Matter is a type of thing, whereas energy is a property of a thing, like velocity or volume. So your premise is flawed. In particular: there's no such thing as "a solid state of energy" - hopefully it makes sense that a property of something does not have states energy is not represented by waves, though it is a property ... 7 It's certainly possible for a particle's mass to come partially from kinetic energy of massless particles; for example, about half of a proton's mass is the kinetic energy of its gluons. But the kind of mass that fundamental particles have, the kind that comes from the Higgs mechanism, doesn't appear to be of that kind. Maybe someday we will discover that it ... 6 This equation is incredibly generic and describes many phenomena outside of nuclear phenomena. For example: place the the following setup in a box (a spring and some bars) and weigh them: . Now, loosen the spring and repeat. You should measure a smaller mass because you've removed some of the energy. In reality you couldn't possibly measure the ... 6 Assume that we started with a dry cloth, use a dry iron (no steam) and don't initiate any chemistry (you don't want to burn the collar, after all). There are two way to look at this. On a macroscopic scale the cloth (taken as a whole) has gained internal energy, so it simply is more massive. No "transformation" is required. The energy exists in the more ... 6 Your kinetic energy formula is incorrect. 
The correct one for a massive particle is $$KE = m c^2\left[ \frac{1}{\sqrt{ 1 - v^2/c^2}} - 1 \right]$$ with $$p = \frac{ m v}{\sqrt{1 - v^2/c^2}}$$ Now, to describe a photon, we wish to take two limits$m \to 0$and$v \to c$. These limits must be taken properly. To be precise, we wish to take this limit in ... 6 In the early days of special relativity it was noted that the mass of an object appeared to increase as the speed of the object approached the speed of light. It was common to see the notation$m_0$used for the rest mass and$m$for the relativistic mass. In this sense the equation$E = mc^2$is always true. However the concept of relativistic mass is ... 6 The equation, properly understood is: $$E = \gamma m c^2$$ where$m$is the invariant mass (or the deprecated "rest mass"). Now, for a photon, the invariant mass is zero. But this does not imply that$E$is zero since the Lorentz factor$\gamma \rightarrow \infty$as the speed goes to c. Thus, this equation has an indeterminate form for a massless ... 6 It is incorrect to say that the energy of a string directly gives us the mass of the particle. While it is true that more the oscillations on the string, higher the mass, the relation between the oscillations and the mass it not that of a simple proportionality. What's really happening is that the string has some energy$E$(due to oscillations on it) and a ... 5 It is the convention of setting the velocity of light$c=1$that allows for this, the natural units, otherwise it is$\mathrm{GeV}/c^2$The rest mass energy connection $$E^2=p^2+m^2$$ at rest then the mass is identified with energy in natural units. 5 To understand binding energy and mass defects in nuclei, it helps to understand where the mass of the proton comes from. The news about the recent Higgs discovery emphasizes that the Higgs mechanism gives mass to elementary particles. This is true for electrons and for quarks which are elementary particles (as far as we now know), but it is not true for ... 5 The Einstein's mass-energy relation,$E = mc^2$, gives the total energy content of the system. But this is not the energy we get from the object. When you annihilate an electron with a positron, both particles vanish so that the released energy is equal to the energy of the two particles according to Einstein's formula. But when you burn 1 kg of wood you ... 5 In order to answer this question, you should first ask yourself what you mean by "object". From an elementary particle perspective, every particle has a characteristic constant rest mass. These masses aren't thought to change, just like the charge of an electron doesn't ever change. So in this sense, the answer to your question is "no, you cannot accelerate ... 5 This is cool because$E=mc^2$can act as some sort of uncertainty relation; if you have a population of photons with energy$E$, they are engendered with a mass$\frac{E}{c^2}$, no matter what my intuition says. (Is that right?) No, it's not quite right. In relativity, it turns out that energy and momentum are parts of a single four-dimensional vector, ... 5 The equation$E=mc^2$equates rest energy to mass. There is a third symbol in this equation that represents the speed of light, but this is a universal constant. One can always select physical units such that this constant attains value unity. Regardless the system of units selected, up to a numerical proportionality constant, the equation$E=mc^2$... 5 The relation$E=mc^2$only works for particles at rest, which is evidently not the case for photons. 
In the general case, the relation is $$E^2=m^2c^4+p^2c^2$$ for a particle with momentum $p$. (Note, though, that the momentum is not necessarily $p=mv$ as in the Newtonian case! See for instance If photons have no mass, how can they have momentum?) For a ...
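As a quick numeric companion to the eV-to-mass conversion described in the first excerpt above, the following sketch (mine, with rounded constants) converts a 125 GeV rest energy to kilograms:

```python
# Convert a 125 GeV rest energy to a mass via m = E / c^2 (rounded constants).
eV = 1.602176e-19          # joules per electron volt
c  = 2.99792458e8          # speed of light, m/s
E  = 125e9 * eV            # 125 GeV in joules
print(E / c**2)            # ~2.2e-25 kg
```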
2013-12-19 08:37:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8811042904853821, "perplexity": 756.1298798783822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345762590/warc/CC-MAIN-20131218054922-00014-ip-10-33-133-15.ec2.internal.warc.gz"}
https://benvitalenum3ers.wordpress.com/2016/05/13/squarefree-semiprimes-abc-abbcca/
## Squarefree semiprimes (A,B,C); A+B, B+C, C+A

Can you find squarefree semiprimes A, B, C such that each of A + B, B + C, C + A is a square number?

Paul found:

$14 + 86 \; = \; (2 \times 7) + (2 \times 43) \; = \; 10^2$
$86 + 35 \; = \; (2 \times 43) + (5 \times 7) \; = \; 11^2$
$14 + 35 \; = \; (2 \times 7) + (5 \times 7) \; = \; 7^2$

$26 + 74 \; = \; (2 \times 13) + (2 \times 37) \; = \; 10^2$
$74 + 95 \; = \; (2 \times 37) + (5 \times 19) \; = \; 13^2$
$26 + 95 \; = \; (2 \times 13) + (5 \times 19) \; = \; 11^2$

$38 + 106 \; = \; (2 \times 19) + (2 \times 53) \; = \; 12^2$
$106 + 218 \; = \; (2 \times 53) + (2 \times 109) \; = \; 18^2$
$38 + 218 \; = \; (2 \times 19) + (2 \times 109) \; = \; 16^2$

$38 + 158 \; = \; (2 \times 19) + (2 \times 79) \; = \; 14^2$
$158 + 803 \; = \; (2 \times 79) + (11 \times 73) \; = \; 31^2$
$38 + 803 \; = \; (2 \times 19) + (11 \times 73) \; = \; 29^2$

$106 + 218 \; = \; (2 \times 53) + (2 \times 109) \; = \; 18^2$
$218 + 623 \; = \; (2 \times 109) + (7 \times 89) \; = \; 29^2$
$106 + 623 \; = \; (2 \times 53) + (7 \times 89) \; = \; 27^2$

$118 + 206 \; = \; (2 \times 59) + (2 \times 103) \; = \; 18^2$
$206 + 323 \; = \; (2 \times 103) + (17 \times 19) \; = \; 23^2$
$118 + 323 \; = \; (2 \times 59) + (17 \times 19) \; = \; 21^2$

### 2 Responses to Squarefree semiprimes (A,B,C); A+B, B+C, C+A

1. paul says: Here are a few

14 + 86 = 10^2, 86 + 35 = 11^2, 14 + 35 = 7^2
26 + 74 = 10^2, 74 + 95 = 13^2, 26 + 95 = 11^2
38 + 106 = 12^2, 106 + 218 = 18^2, 38 + 218 = 16^2
38 + 158 = 14^2, 158 + 803 = 31^2, 38 + 803 = 29^2
106 + 218 = 18^2, 218 + 623 = 29^2, 106 + 623 = 27^2
118 + 206 = 18^2, 206 + 323 = 23^2, 118 + 323 = 21^2

Paul
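A short search script can reproduce and extend these triples. This is my own sketch (it assumes the sympy package for factorisation, which is not mentioned in the post):

```python
# Search for squarefree semiprimes A < B < C with A+B, B+C, A+C all perfect
# squares (my own sketch; assumes sympy is available for factorisation).
from itertools import combinations
from math import isqrt
from sympy import factorint

def is_squarefree_semiprime(n):
    f = factorint(n)                      # {prime: exponent}
    return sum(f.values()) == 2 and all(e == 1 for e in f.values())

def is_square(n):
    r = isqrt(n)
    return r * r == n

LIMIT = 250
semiprimes = [n for n in range(6, LIMIT) if is_squarefree_semiprime(n)]

for a, b, c in combinations(semiprimes, 3):
    if is_square(a + b) and is_square(b + c) and is_square(a + c):
        print(a, b, c)   # e.g. 14 35 86 and 26 74 95 appear below 250
```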
2017-11-19 18:00:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 18, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7061054706573486, "perplexity": 5046.811963771514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805708.41/warc/CC-MAIN-20171119172232-20171119192232-00218.warc.gz"}
https://www.gradesaver.com/textbooks/math/geometry/geometry-common-core-15th-edition/chapter-10-area-10-2-areas-of-trapezoids-rhombuses-and-kites-lesson-check-page-625/10
## Geometry: Common Core (15th Edition)

No, you do not need to know the lengths of the sides to find the area of a kite. You only need to know the lengths of the diagonals. The formula for the area of a kite is $A=\frac{1}{2}d_{1}d_{2}$, where $d_1$ and $d_2$ are the lengths of the diagonals.
2022-05-27 02:12:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5833483934402466, "perplexity": 191.32936497656462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662631064.64/warc/CC-MAIN-20220527015812-20220527045812-00535.warc.gz"}
https://codereview.stackexchange.com/questions/110032/calculator-using-tkinter
# Calculator using Tkinter So this is my first project. I made a Calculator using Tkinter. For the next version, I will try adding • oops concepts • Custom parser for input Here's the code #!/usr/bin/env python3.4 from tkinter import * import parser root = Tk() root.title('Calculator') i = 0 def factorial(): """Calculates the factorial of the number entered.""" whole_string = display.get() number = int(whole_string) fact = 1 counter = number try: while counter > 0: fact = fact*counter counter -= 1 clear_all() display.insert(0, fact) except Exception: clear_all() display.insert(0, "Error") def clear_all(): """clears all the content in the Entry widget""" display.delete(0, END) def get_variables(num): """Gets the user input for operands and puts it inside the entry widget""" global i display.insert(i, num) i += 1 def get_operation(operator): """Gets the operand the user wants to apply on the functions""" global i length = len(operator) display.insert(i, operator) i += length def undo(): """removes the last entered operator/variable from entry widget""" whole_string = display.get() if len(whole_string): ## repeats until ## now just decrement the string by one index new_string = whole_string[:-1] print(new_string) clear_all() display.insert(0, new_string) else: clear_all() display.insert(0, "Error, press AC") def calculate(): """ Evaluates the expression ref : http://stackoverflow.com/questions/594266/equation-parsing-in-python """ whole_string = display.get() try: formulae = parser.expr(whole_string).compile() result = eval(formulae) clear_all() display.insert(0, result) except Exception: clear_all() display.insert(0, "Error!") display = Entry(root, font = ("Calibri", 13)) display.grid(row = 1, columnspan = 6 , sticky = W+E) one = Button(root, text = "1", command = lambda : get_variables(1), font=("Calibri", 12)) one.grid(row = 2, column = 0) two = Button(root, text = "2", command = lambda : get_variables(2), font=("Calibri", 12)) two.grid(row = 2, column = 1) three = Button(root, text = "3", command = lambda : get_variables(3), font=("Calibri", 12)) three.grid(row = 2, column = 2) four = Button(root, text = "4", command = lambda : get_variables(4), font=("Calibri", 12)) four.grid(row = 3 , column = 0) five = Button(root, text = "5", command = lambda : get_variables(5), font=("Calibri", 12)) five.grid(row = 3, column = 1) six = Button(root, text = "6", command = lambda : get_variables(6), font=("Calibri", 12)) six.grid(row = 3, column = 2) seven = Button(root, text = "7", command = lambda : get_variables(7), font=("Calibri", 12)) seven.grid(row = 4, column = 0) eight = Button(root, text = "8", command = lambda : get_variables(8), font=("Calibri", 12)) eight.grid(row = 4, column = 1) nine = Button(root , text = "9", command = lambda : get_variables(9), font=("Calibri", 12)) nine.grid(row = 4, column = 2) cls = Button(root, text = "AC", command = clear_all, font=("Calibri", 12), foreground = "red") cls.grid(row = 5, column = 0) zero = Button(root, text = "0", command = lambda : get_variables(0), font=("Calibri", 12)) zero.grid(row = 5, column = 1) result = Button(root, text = "=", command = calculate, font=("Calibri", 12), foreground = "red") result.grid(row = 5, column = 2) plus = Button(root, text = "+", command = lambda : get_operation("+"), font=("Calibri", 12)) plus.grid(row = 2, column = 3) minus = Button(root, text = "-", command = lambda : get_operation("-"), font=("Calibri", 12)) minus.grid(row = 3, column = 3) multiply = Button(root,text = "*", command = lambda : get_operation("*"), 
font=("Calibri", 12)) multiply.grid(row = 4, column = 3) divide = Button(root, text = "/", command = lambda : get_operation("/"), font=("Calibri", 12)) divide.grid(row = 5, column = 3) pi = Button(root, text = "pi", command = lambda: get_operation("*3.14"), font =("Calibri", 12)) pi.grid(row = 2, column = 4) modulo = Button(root, text = "%", command = lambda : get_operation("%"), font=("Calibri", 12)) modulo.grid(row = 3, column = 4) left_bracket = Button(root, text = "(", command = lambda: get_operation("("), font =("Calibri", 12)) left_bracket.grid(row = 4, column = 4) exp = Button(root, text = "exp", command = lambda: get_operation("**"), font = ("Calibri", 10)) exp.grid(row = 5, column = 4) # sin, cos, log, ln undo_button = Button(root, text = "<-", command = undo, font =("Calibri", 12), foreground = "red") undo_button.grid(row = 2, column = 5) fact = Button(root, text = "x!", command = factorial, font=("Calibri", 12)) fact.grid(row = 3, column = 5) right_bracket = Button(root, text = ")", command = lambda: get_operation(")"), font =("Calibri", 12)) right_bracket.grid(row = 4, column = 5) square = Button(root, text = "^2", command = lambda: get_operation("**2"), font = ("Calibri", 10)) square.grid(row = 5, column = 5) root.mainloop() Any Suggestions on how could I improve upon it guys? Edit: https://github.com/prodicus/pyCalc ## Use an object oriented structure You mention this in your question. It definitely makes it easier to organize your code. I recommend reading this question: https://stackoverflow.com/q/17466561/7432 ## Don't use global imports PEP8 discourages global imports. Experience has shown that doing so leads to code that can be hard to maintain over time. Change this: from tkinter import * to this: import tkinter as tk This will require you to prefix tk. to all of the tk classes. This is a Good Thing. The Zen of Python tells us that explicit is better than implicit. For example: root = tk.Tk() ... display = tk.Entry(root, ...) ## Use a named font One of the really great features of Tkinter is the notion of "named fonts". Create a custom font, and use that for your widgets rather than hard-coding the font in each widget. If you decide to change the font later, you only have to change one line of code. As a plus, if you change the font at runtime (eg: give the user an "increase font / decrease font" menu item), all of the widgets that use this font will automatically and instantly change. import tkinter.font customFont = tkinter.font.Font(family=font="Calibri", size=12) one = Button(..., font=customFont) ## Create your buttons in a loop You are creating a bunch of numeric buttons that are nearly identical. I suggest creating them in a loop to cut down on the number of lines. For example: buttons =[] for i in range(0,10): button = tk.Button(root, text=str(i), font=customFont, command=lambda num=i: get_variables(num)) buttons.append(button) ## Separate layout from widget creation It is much easier to visualize the layout of your widgets if you separate the creation of the widgets from the layout of the widgets. 
Assuming you are creating your widgets in a loop, you can then lay them out easily in one clear block of code: buttons[1].grid(row=2, column=0) buttons[2].grid(row=2, column=1) buttons[3].grid(row=2, column=2) buttons[4].grid(row=3, column=0) buttons[5].grid(row=3, column=1) buttons[6].grid(row=3, column=2) buttons[7].grid(row=4, column=0) buttons[8].grid(row=4, column=1) buttons[9].grid(row=4, column=2) With this simple example it's not overly important, but this is a good habit to get into. This becomes more true when you have a complex layout with widgets that span various rows and columns. Weird factorial From a user-interface point of view, the factorial is very weird. All the others buttons just add the symbol to the display, but factorial actually evaluates the whole expression. The principle of last suprise states that if 9 buttons do a thing, the 10-th should either: 1. Behave similarly to them 2. Have a special mark on it to signify speciality. (On a side note making the AC, <- and = red really improves user experience in my opinion.) Constants If you use the same value many times, you should give it a name, so that modifying it is faster. You use ("Calibri", 12) 22 times in your code! What a boring day if you decide to make the font bigger! FONT = ("Calibri", 12) And then replace ("Calibri", 12) with FONT to allow for very fast code upgrades. • Thanks for that. Would you shed some light on what are the advantages of adding class'es to a GUI programs in general? – Tasdik Rahman Nov 6 '15 at 17:43 • @prodicus I am not expert in OO, but usually waiting some days gets good in depth answers, at first each question receives only appetizers. – Caridorc Nov 6 '15 at 17:45 • No problem. Do you know how to make an executable for windows machine out of this code if I am on Ubuntu? – Tasdik Rahman Nov 6 '15 at 17:56 • @prodicus please be careful with on-topic-ness. We review the code by saying what we know, not answer any question vaguely regarding the topic. – Caridorc Nov 6 '15 at 17:59 • Sorry for going off topic. Thanks for your inputs by the way. – Tasdik Rahman Nov 7 '15 at 0:38 Instead of repeating code a bunch: root.columnconfigure(0,pad=3) Surely you can do two loops: for i in range(4): It'd be even better if 4 and 3 were named constants that explained the seeming arbitrary values. Same with pad too. • Thanks for that. I noticed now, that I have an inconsistency in the interface. How should I refactor the factorial part? – Tasdik Rahman Nov 6 '15 at 17:40
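To make the named-font and button-loop suggestions concrete, here is a small self-contained sketch of my own (not code from either answer). Note that tkinter.font.Font takes the font family as a plain keyword argument, which also avoids the family=font="Calibri" typo in the first answer's snippet.

```python
# Minimal sketch combining the review suggestions: explicit import, a named
# font, numeric buttons created in a loop, and layout kept separate.
import tkinter as tk
import tkinter.font

root = tk.Tk()
custom_font = tkinter.font.Font(family="Calibri", size=12)

display = tk.Entry(root, font=custom_font)
display.grid(row=0, column=0, columnspan=3, sticky="we")

def press(digit):
    display.insert(tk.END, str(digit))

buttons = []
for i in range(10):
    btn = tk.Button(root, text=str(i), font=custom_font,
                    command=lambda d=i: press(d))
    buttons.append(btn)

# Layout separated from widget creation, as suggested in the first answer.
positions = {0: (4, 1), 1: (1, 0), 2: (1, 1), 3: (1, 2),
             4: (2, 0), 5: (2, 1), 6: (2, 2),
             7: (3, 0), 8: (3, 1), 9: (3, 2)}
for digit, (r, c) in positions.items():
    buttons[digit].grid(row=r, column=c)

root.mainloop()
```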
2021-05-09 17:16:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39619314670562744, "perplexity": 10092.424908194453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989006.71/warc/CC-MAIN-20210509153220-20210509183220-00021.warc.gz"}
https://twiki.cern.ch/twiki/bin/view/Main/BprimeBackgroundEstimation
# Background Estimation

We require 5 jets with $p_T$ > 30 GeV/$c$ and $|\eta|$ < 2.4, and a minimum of 3 of these jets with a CSV b-discriminant > 0.244 (loose cut). Of the 66,790,000 events, 273,723 events pass these selection cuts.

From these selected events, we performed the $\chi^2$ test on the 5 jets in the event. The five jets are compared in such a way that one jet has its mass compared to the Higgs mass with an RMS of ~15 GeV/$c^2$, and a dijet pair has its invariant mass compared to the Higgs mass with an RMS of ~14 GeV/$c^2$ (these RMS values were calculated from Monte Carlo). The remaining two jets are paired with either the single-jet (merged) Higgs or the dijet (resolved) Higgs by comparing the possible mass differences of the pairings, looking to make the difference as close to 0 as possible. (The mass difference is defined as the unitless quantity $\frac{m_1 - m_2}{m_1 + m_2}$.) Each mass comparison has its $\chi^2$ value $\left(\frac{(Observed - Expected)^2}{(RMS)^2}\right)$ calculated, and the jet configuration with the lowest sum of these 3 values (the total $\chi^2$ value) is assumed to be the correct one (see the $\chi^2$ Correctness Study section of this page).

For estimating the background present in the comparison of these jets to the Higgs mass, mass windows above and below the accepted Higgs mass were selected to be the comparison masses in the $\chi^2$ calculations. We selected 55, 64, 72, 81, 90, 98, 108, 117, 134, 142, 151, 160, 169, 177, 186, and 195 GeV/$c^2$ for these mass windows. After the $\chi^2$ and mass difference (|mass diff| < 0.2) cuts were made, we implemented separate cuts on the total reconstructed b' mass (> 600 GeV/$c^2$, > 800 GeV/$c^2$, > 1000 GeV/$c^2$, and > 1200 GeV/$c^2$) for each event, and the number of events selected for each cut at each given mass window was recorded. From these event acceptance numbers, we can estimate the number of expected background events in the Higgs mass window for mass comparisons. Combining the estimates from each of these mass-cut plots, we can estimate how the background is affected by the rising mass of the b'.

The plots are available below: first a raw plot of the event acceptance for each mass window, followed by the same plot with a fit line overlay, and finally a zoomed-in view of the fit plots. The 1000 and 1200 GeV/$c^2$ cuts showed lower acceptance at the initial mass windows of 55 and 64 GeV/$c^2$, so a Gaussian curve with a trailing exponential was used because it fit the data better than a standard exponential (an example from the 1000 GeV/$c^2$ plot is shown below).
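The jet-assignment procedure above can be sketched in a few lines of code. The following is my own illustration, not analysis code: it enumerates the ways to split five jets into a merged-Higgs jet, a resolved-Higgs dijet, and two leftover jets, scores each split with the three $\chi^2$ terms (the 15 and 14 GeV RMS values are the ones quoted above; the Higgs mass of 125 GeV and the RMS of the unitless mass difference are assumptions), and keeps the split with the lowest total.

```python
# Illustration of the chi-square jet assignment described above (sketch only).
from itertools import combinations
import math

M_HIGGS       = 125.0   # GeV, assumed value for "the Higgs mass"
RMS_MERGED    = 15.0    # GeV, quoted RMS for the single-jet (merged) Higgs mass
RMS_RESOLVED  = 14.0    # GeV, quoted RMS for the dijet (resolved) Higgs mass
RMS_MASS_DIFF = 0.1     # assumed RMS of the unitless mass difference

def inv_mass(*jets):
    """Invariant mass of jets given as (E, px, py, pz) four-vectors."""
    E, px, py, pz = (sum(j[k] for j in jets) for k in range(4))
    return math.sqrt(max(E*E - px*px - py*py - pz*pz, 0.0))

def total_chi2(jets, merged, pair, extra_merged, extra_resolved):
    m_single  = inv_mass(jets[merged])
    m_dijet   = inv_mass(jets[pair[0]], jets[pair[1]])
    m_bprime1 = inv_mass(jets[merged], jets[extra_merged])
    m_bprime2 = inv_mass(jets[pair[0]], jets[pair[1]], jets[extra_resolved])
    mass_diff = (m_bprime1 - m_bprime2) / (m_bprime1 + m_bprime2)
    return ((m_single - M_HIGGS) / RMS_MERGED) ** 2 \
         + ((m_dijet - M_HIGGS) / RMS_RESOLVED) ** 2 \
         + (mass_diff / RMS_MASS_DIFF) ** 2

def best_assignment(jets):
    """Return (chi2, assignment) with the lowest total chi2 over all splits."""
    best = (float("inf"), None)
    for merged in range(5):
        rest = [i for i in range(5) if i != merged]
        for pair in combinations(rest, 2):
            extras = [i for i in rest if i not in pair]
            for a, b in (extras, extras[::-1]):
                chi2 = total_chi2(jets, merged, pair, a, b)
                best = min(best, (chi2, (merged, pair, a, b)))
    return best
```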
-- BrianAnthonyFrancisco - 2015-07-24

(Attached plots: raw acceptance, fit-overlay, and zoomed fit plots for the 600, 800, 1000, and 1200 GeV/$c^2$ mass cuts, plus the corresponding b' mass plots.)
https://groups.google.com/g/sci.physics.research/c/mXVYph8L-Ns/m/dAMmm7rGVfIJ?hl=en
# Invariant Thinking

### James B. Glattfelder, Jul 8, 2002, 10:39:25 PM

[The nature of this post is somewhat philosophical, so it probably will get filtered out by the moderators. Although, one could argue that science is slowly getting very close to epistemology and ontology (e.g. what is the nature of space and time and such - or see the book: "Physics Meets Philosophy at the Planck Scale"). And obviously our philosophical make-up will have an impact on the way we mathematically try and grapple with these border-line issues.]

[Moderator's note: This post is OK, but respondents are urged to confine themselves to physics aspects of the question. -MM]

Arguably the most fruitful principle in physics has been the notion of symmetry. Covariance and gauge invariance - two simply stated symmetry conditions - are at the heart of GR and the SM. This is not only aesthetically pleasing, it also illustrates a basic fact: in coding reality into a formal system, we should only allow the most minimal reference to be made to this formal system. I.e. reality likes to be translated into a language that doesn't explicitly depend on its own peculiarities (coordinates, number bases, units, ...). This is a pretty obvious idea and allows for physical laws to be universal.

But what happens if we take this idea to the logical extreme? Will the ultimate theory of reality demand: I will only allow myself to be coded into a formal framework that makes *no* reference to itself whatsoever. Obviously a mind twister. But the question remains: what is the ultimate symmetry idea? Or: what is the ultimate invariant? (Does this imply "invariance" even with respect to our thinking?) How do we construct a system that supports itself out of itself, without relying on anything external? Can such a magical feat be performed by our thinking?

The problems concerning quantum gravity could be seen to arise within the context of background dependence, and in constructing background-free theories one is trying just such an ultimate "invariance trick"...

### James B. Glattfelder, Jul 11, 2002, 10:20:55 PM

j_...@gmx.net (James B. Glattfelder) wrote in message news:<ed220336.02070...@posting.google.com>...

I mean, isn't this what we are trying to achieve: to reduce the ballast of our mathematical formalism to the minimum, so it can mirror a maximum of reality? Take category theory. Reduce mathematics to two abstractions: objects (stuff) and morphisms (relations of stuff) *without* impairing its power (e.g. topos theory). I love these words (from John's home page):

> We can also express the principal of general covariance and the principal of
> gauge-invariance most precisely by saying that observables are functorial.
> So physicists should regard functoriality as mathematical for "able to be defined
> without reference to a particular choice of coordinate system."

I.e. another "invariance trick".

*****

And what is the holographic principle telling us about reality? String/M-theory: "[the holographic principle is] the assertion that the number of possible states of a region of space is the same as that of a system of binary degrees of freedom distributed on the boundary of the region" (Susskind). A bulk theory of gravity is equivalent to a non-gravitating theory on the boundary, as in AdS/CFT.
However, quantum gravity wants to go deeper: "The conclusion is that the [weak] holographic principle is not a relationship between two independent sets of concepts: bulk theories and measures of geometry vrs boundary theories and measures of information. Instead, it is the assertion that in a fundamental theory the set of concepts must be completely reduced to the second" (Smolin). And: "The familiar picture of bulk space-times with fields and geometry must emerge in the semiclassical limit, but these concepts can play no role in the fundamental theory." Again a distinction between reality, our fundamental description thereof and mathematics.

Finally: "This is what we found about Nature's book keeping system: the data can be written onto a surface, and the pen with which the data are written has a finite size" ('t Hooft). Apart from its implications, this quantum information problem seems again to spell out the issues of symmetry/invariance: area and volume are not distinguishable concepts on a fundamental level. One is reminded of Gauss' theorem in vector analysis: all the information of the source (infinitesimal point) of a vector field is coded into any surface enclosing this source. So maybe the holographic principle is nature's way of shielding itself from mathematical idealizations...

*****

And while we are in these outlandish regions of abstract thought and fundamental reality, what about time? Obviously something very annoying in physics, because it appears so real to our senses, yet is so poorly implemented in theories. And just as we are getting used to mental pictures of higher dimensions, why not go for the ultimate nightmare: more than one time dimension. To start, these ideas have (at least in two dimensions) been implemented in 12-dimensional F-theory and (implicitly) in the transactional interpretation of quantum mechanics (Wheeler-Feynman absorber theory). While the first explicitly introduces two degrees of freedom, the second allows for a propagation from the future to the past (advanced wave solutions), i.e. invoking the notion of two time directions on one time axis. One can always argue that, of course, we psychologically only perceive one time dimension directed into the future, but maybe our mathematical description of reality needs these additional degrees of freedom to work.

Einstein spent the last years of his life trying to unite electrodynamics with gravity, without real success (although Kaluza, without much fuss, pulls this trick by allowing for more degrees of freedom in his theory: (1,4) space-time dimensions, hence starting the higher-dimensions thing). So why not promote t to a tensor, or at least a 3-vector ;-) You can always draw simple diagrams with two time axes and one space axis and think about going from a worldline to a "temporal" world-sheet. And if you really feel like it you can try all those QM puzzles with this new idea (double slit: same particle located at two different points in time, or whatever). I think I just got a headache...

### Jul 13, 2002, 11:16:23 PM

On Tue, 9 Jul 2002 02:39:25 GMT, j_...@gmx.net (James B. Glattfelder) wrote:

>But what happens if we take this idea to the logical extreme? Will the
>ultimate theory of reality demand: I will only allow myself to be
>coded into a formal framework that makes *no* reference to itself
>whatsoever.

I think one should avoid all extremes. The devil is always in the details, and for anything good to function - the details must not be too simple.
Bohm, for instance, in his early book "Causality and Chance in Modern Physics" stressed that there are levels of description, and that these levels are not always compatible and/or reducible to each other. Every bright idea in physics seems to have its limits. To understand why this is so, we would have to discuss the question: "what are ideas and where do they come from?" - but this is not the right place for this kind of inquiry (at least not yet). Therefore let me just point out that there is something important missing in the way you stated your question, and this something is the "quantum factor" or, better, the principle of "partnership" that J.A. Wheeler described visually with his U picture - where on one end is the "creation" and on the other end is the "observation" that makes the "virtual creation" real. It seems that this kind of duality in the laws of physics is necessary for the universe to "work".

You mentioned symmetry principles: coordinate invariance and gauge invariance. They are useful principles but, perhaps, only to some degree. Perfect symmetry need not be the best solution. Physicists get much more from researching symmetry breaking mechanisms. They are not always able to explain the detailed mechanisms - so they created the concept of "spontaneous symmetry breaking". Perfect symmetry (diffeomorphism invariance and gauge invariance) holds, perhaps, in an ideal world. In our "sample" these symmetries may well be broken.

To answer your question, to make a step in this direction, we will probably need to know more about complexity and how physics attempts to analyze a (perhaps infinitely) complex universe in relatively simple terms, and what is the price that must be paid for it. Gell-Mann and Hartle introduced the term IGUS-es - information gathering and utilizing systems. Perhaps the physics of the XXI century will have to pay more attention to these ideas of Wheeler, Gell-Mann and Hartle, Wolfram and others.

In our paper (Blanchard and Jadczyk, Ann. der Phys. 4 (1995) 583-599) we wrote: "So, how can we manipulate states without being able to manipulate Hamiltonians? We can only guess what could be the answer of other interpretations of Quantum Theory. Our answer is: we have some freedom in manipulating $C$ and $V$. We can not manipulate dynamics, but binamics is open. It is through $V$ and $C$ that we can feedback the processed information and knowledge - thus our approach seems to leave comfortable space for IGUS-es. In other words, although we can exercise little if any influence on the continuous, deterministic evolution\footnote{Probably the influence through the damping operators $\Lambda_\alpha$ is negligible in normal circumstances}, we may have partial freedom of intervening, through $C$ and $V$, at bifurcation points, when die tossing takes place. It may be also remarked that the fact that more information can be used than is contained in master equation of standard quantum theory, may have not only engineering but also biological significance."

The concept of "information" will be, perhaps, important in what you call "ultimate theory of reality".

ark
-- http://www.cassiopaea.org/quantum_future/homepage.htm

### Uncle Al, Jul 14, 2002, 11:50:14 AM

"James B. Glattfelder" wrote: [snip]

> Arguably the most fruitful principle in physics has been the notion of
> symmetry. Covariance and gauge invariance - two simply stated symmetry
> conditions - are at the heart of GR and the SM.
> This is not only
> aesthetically pleasing it also illustrates a basic fact: in coding
> reality into a formal system, we should only allow the most minimal
> reference to be made to this formal system. I.e. reality likes to be
> translated into a language that doesn't explicitly depend on its own
> peculiarities (coordinates, number bases, units, ...). This is a
> pretty obvious idea and allows for physical laws to be universal.

The heart of physics is Noether's theorem: every symmetry is associated with a conserved quantity and the reverse. A physical system with a Lagrangian invariant with respect to the symmetry transformations of a Lie group has, in the case of a group with a finite (or countably infinite) number of independent infinitesimal generators, a conservation law for each such generator, and certain "dependencies" in the case of a larger infinite number of generators (General Relativity and the Bianchi identities). The reverse is true. A non-geometric fundamental theory of matter is unimaginable given our perception of symmetries and their consequences.

Relativity removes all background coordinates. It models continuous spacetime, going beyond conformal symmetry (scale independence) to symmetry under all smooth coordinate transformations - general covariance (the stress-energy tensor embodying local energy and momentum) - resisting quantization. However, if reality is quantized then there is an intrinsic scale to things (e.g., the Planck length, 1.616x10^(-26) nm). This conflict remains unresolved.

> But what happens if we take this idea to the logical extreme? Will the
> ultimate theory of reality demand: I will only allow myself to be
> coded into a formal framework that makes *no* reference to itself
> whatsoever. Obviously a mind twister. But the question remains: what
> is the ultimate symmetry idea? Or: what is the ultimate invariant?
> (Does this imply "invariance" even with respect to our thinking?) How
> do we construct a system that supports itself out of itself, without
> relying on anything external? Can such a magical feat be performed by
> our thinking?
>
> The problems concerning quantum gravity could be seen to arise within
> the context of background dependence and in constructing
> background-free theories one is trying just such an ultimate
> "invariance trick"...

John Baez is our doyen of fundamental mathematization of physics. Neither he nor any of his ilk can bell the cat. This is not for lack of ability, but for lack of constraint. Theory gropes hopelessly and abundantly without observation to guide it to physical solutions. Contemporary physics is experiment driven by theory. It is incredibly hobbled by being forced to look "in the right places." Serendipity is looking for a needle in a haystack and finding the farmer's daughter.

Relativity and quantization are contradictory and incompatible. One or both must be incomplete in the manner that Newton fell to both. Empirical anomalies must exist! The link below explores an unambiguous test in existing apparatus that addresses both the validity of a General Relativity founding postulate and the concept of "point phenomenon" in physics as a whole. Somebody should look. One way or another we will then move forward.

--
Uncle Al
http://www.mazepath.com/uncleal/eotvos.htm
(Toxic URL! Unsafe for children and most mammals)
"Quis custodiet ipsos custodes?" The Net!

### James B. Glattfelder, Aug 5, 2002, 10:07:31 PM

j_...@gmx.net (James B. Glattfelder) wrote in message news:<ed220336.02070...@posting.google.com>...

This thread started with going on about the question of the relation of our thinking with respect to reality. E.g. the usefulness of mathematical frameworks that exhibit a minimal amount of referencing (i.e. maximally invariant) to express universal features. But is this all? Is the feature of analytical coding of reality the only formal possibility to grapple with reality? Until recently I would have suggested a "yes", but something made me rethink.

Although I personally thought of things such as object-oriented structures as being a very powerful approach to problem-solving, I never gave it any thought beyond being a pragmatic engineering tool. (Here one could already emphasize that OO programming is an implementation of systems theoretic thinking and that computer languages in general hold linguistic issues.) But I never conceived of computation per se as a framework to formally deal with the workings of reality.

However, looking at the contents of Wolfram's "A New Kind of Science" (although Wolfram also claims to shed light on issues of fundamental physics as well, next to much more), it appears as though the whole industry of mathematics can be seen to spring from this foundation of computation. Assertions like "category theory can be viewed as a formalization of operations on abstract data types in computer languages" (p. 1154) are only the tip of the iceberg: "This emphasis on theorems has also led to a focus on equations that statically state facts rather than on rules that define actions, as in most of the systems in this book. But despite all these issues, many mathematicians implicitly tend to assume that somehow mathematics as it is practiced is universal, and that any possible abstract system will be covered by some area of mathematics or another. The results of this book, however, make it quite clear that this is not the case, and that in fact traditional mathematics has reached only a tiny fraction of all the kinds of abstract systems that can in principle be studied" (p. 860).

If verified this would really shift a paradigm I held, that mathematical models are the best probe of reality. I liked the image of a map labeled "mathematics" with an arrow pointing to some region declaring "you are here", meaning that our reality would be described by this kind of mathematics. So perhaps the map should be called "computational possibilities"...
https://proofwiki.org/wiki/Straight_Line_Commensurable_with_Bimedial_Straight_Line_is_Bimedial_and_of_Same_Order
# Straight Line Commensurable with Bimedial Straight Line is Bimedial and of Same Order

## Theorem

In the words of Euclid:

A straight line commensurable in length with a bimedial straight line is itself also bimedial and the same in order.

## Proof

Let $AB$ be bimedial. Let $CD$ be commensurable in length with $AB$. It is to be shown that $CD$ is bimedial, and that the order of $CD$ is the same as the order of $AB$.

Let $AB$ be divided into its medials by $E$. Let $AE$ be the greater medial. By definition, $AE$ and $EB$ are medial straight lines which are commensurable in square only.

Using Proposition $12$ of Book $\text{VI}$: Construction of Fourth Proportional Straight Line, let it be contrived that:

$AB : CD = AE : CF$

Then also:

$EB : FD = AB : CD$

But $AB$ is commensurable in length with $CD$. Therefore:

$AE$ is commensurable in length with $CF$

and:

$EB$ is commensurable in length with $FD$.

But by hypothesis $AE$ and $EB$ are medial. Therefore $CF$ and $FD$ are medial.

Since:

$AE : CF = EB : FD$

it follows that:

$AE : EB = CF : FD$

But by hypothesis $AE$ and $EB$ are commensurable in square only. Therefore $CF$ and $FD$ are commensurable in square only. But $CF$ and $FD$ are medial. Therefore, by definition, $CD$ is bimedial.

It remains to be demonstrated that $CD$ is of the same order as $AB$.

We have that:

$AE : EB = CF : FD$

Therefore:

$AE^2 : AE \cdot EB = CF^2 : CF \cdot FD$

and so:

$AE^2 : CF^2 = AE \cdot EB : CF \cdot FD$

But $AE^2$ is commensurable with $CF^2$. Therefore $AE \cdot EB$ is commensurable with $CF \cdot FD$.

Suppose $AB$ is a first bimedial. Then $AE \cdot EB$ is rational. It follows that $CF \cdot FD$ is rational. Thus by definition $CD$ is a first bimedial.

Suppose otherwise that $AB$ is a second bimedial. Then $AE \cdot EB$ is medial. It follows that $CF \cdot FD$ is medial. Thus by definition $CD$ is a second bimedial.

Thus $CD$ is of the same order as $AB$.

$\blacksquare$

## Historical Note

This proof is Proposition $67$ of Book $\text{X}$ of Euclid's The Elements.
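As a concrete illustration of the theorem (a numerical example added here for the reader, not part of Euclid's text), take $AE = \sqrt[4]{8}$ and $EB = \sqrt[4]{2}$. Both are medial, they are commensurable in square only (their squares are $2\sqrt 2$ and $\sqrt 2$, in the rational ratio $2 : 1$, while their lengths are in the irrational ratio $\sqrt 2 : 1$), and their product $AE \cdot EB = 2$ is rational, so $AB = \sqrt[4]{8} + \sqrt[4]{2}$ is a first bimedial. The commensurable line $CD = 3 \, AB$ then divides as $CF = 3\sqrt[4]{8}$ and $FD = 3\sqrt[4]{2}$, with rational product $CF \cdot FD = 18$, so $CD$ is again a first bimedial, as the theorem asserts.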
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=141&t=19238
## Is E a path function or state function? $\Delta G^{\circ} = -nFE_{cell}^{\circ}$ Yuchien Ma 2L Posts: 46 Joined: Wed Sep 21, 2016 2:58 pm ### Is E a path function or state function? Is E a path function or state function? Tiffany_Luu_1F Posts: 10 Joined: Wed Sep 21, 2016 2:58 pm ### Re: Is E a path function or state function? I think the internal energy U is a state function so it does not depend on the path but I think E is a path function. stephanieyang_3F Posts: 62 Joined: Wed Sep 21, 2016 2:55 pm ### Re: Is E a path function or state function? E cell isn't a state function. This can be explained by the solution of 14.27 in the homework. When you try to add the half reactions, you can't simply add the E cell, but you can find out the total E cell by adding their Gibbs free energies because the change in Gibbs free energy is a state function. So yes E cell is a path function.
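A standard worked example (textbook values, added here for illustration) shows why the potentials cannot simply be added. Combining the half-reactions $\mathrm{Fe^{3+} + e^- \to Fe^{2+}}$ ($E^\circ \approx +0.77$ V) and $\mathrm{Fe^{2+} + 2e^- \to Fe}$ ($E^\circ \approx -0.44$ V) gives $\mathrm{Fe^{3+} + 3e^- \to Fe}$. Adding the Gibbs energies, $\Delta G^\circ = -(1)F(0.77) - (2)F(-0.44) = +0.11F$, so $E^\circ = -\Delta G^\circ/(3F) \approx -0.04$ V, not $0.77 + (-0.44) = 0.33$ V. The potentials only add directly when the electrons cancel, as in a complete cell reaction.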
http://tug.org/pipermail/macostex-archives/2012-June/049257.html
# [OS X TeX] trouble with bold in fouriernc

Leo Alonso leo.alonso at usc.es
Fri Jun 8 10:18:40 CEST 2012

Hi all. I am using the "fouriernc" package on a Mac with Lion, TeXLive 2011, TeXShop, and everything up to date. I am having trouble getting a \Gamma symbol in bold. \mathbf{} and \mathsymbol{} do not work. I've tried to add a line \usepackage[T1]{fontenc} and also tried the package "bm". The only thing that works is \pbm{} with dreadful results. According to http://milde.users.sourceforge.net/LUCR/Math/mathpackages/fouriernc-symbols.pdf the bold uppercase gamma is included, and in fact everything works fine when I remove fouriernc. Do any of you have a suggestion?
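For readers hitting the same issue, a minimal test file along these lines can help isolate whether a bold math alphabet is being picked up at all (this is a diagnostic sketch, not a confirmed fix; which of the commands, if any, actually produces a bold $\Gamma$ depends on the installed fouriernc version):

```latex
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage{fouriernc}
\usepackage{amsmath}
\usepackage{bm}   % provides \bm; \pmb ("poor man's bold") comes with amsmath
\begin{document}
% Compare the candidates side by side:
$\Gamma \quad \mathbf{\Gamma} \quad \boldsymbol{\Gamma} \quad \bm{\Gamma} \quad \pmb{\Gamma}$
\end{document}
```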
http://stats.stackexchange.com/questions/34782/clustered-standard-errors-in-2-period-dif-in-dif
# Clustered standard errors in 2-period Dif-in-Dif? in order to rectify invalid t-stats because of autocorrelation in Difference-in-Differences (DnD) models, Duflo et al (2004) propose (among other solutions) to collapse data so as to have a before-after DnD. My question: What if we already HAVE a before-after setting (e.g. pre-reform vs post-reform)? Would it be still ok to use clustered SE (provided the number of clusters would be high enough)? Or is it automatically wrong to use them EVEN IF we already have data that has 2 time periods only. - Bertrand et al. propose collapsing so that you don't have to cluster your standard errors. Just use a common t-statistic comparing two groups with unequal variances for your test: $$\begin{equation*} \frac{\bar{y}_1 - \bar{y}_0}{\sqrt{\frac{\hat \sigma^2_0}{N_0} + \frac{\hat \sigma^2_1}{N_1}}} \end{equation*}$$ The above works when you have a balanced panel with all individuals experiencing the intervention at the same time. If this is not the case, perform the following regression. For each individual, get a pre-period average of the outcome $y$, $\bar{y}_{i0}$ and similarly a post-period average $\bar{y}_{i1}$. Now, for each individual, there are two observations: before and after. Turn this into a panel with $2N$ rows, where $N$ is the number of individuals. Run a regression on these data, including individual fixed effects and a treatment indicator.
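As a rough sketch of the collapsed comparison described in the answer (the data layout, effect size, and group split here are invented for illustration, not taken from the post):

```python
# Collapse each individual to one pre-period and one post-period average, then
# compare the post-minus-pre changes of treated vs. control with a Welch t-test
# (the unequal-variance t-statistic written out above).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
treated = np.arange(n) < 100                       # first 100 individuals treated (assumed)
pre = rng.normal(0.0, 1.0, size=n)                 # pre-period average outcome per individual
post = pre + 0.3 * treated + rng.normal(0.0, 1.0, size=n)  # post-period average, with a 0.3 effect

change = post - pre                                # one observation per individual
t_stat, p_value = stats.ttest_ind(change[treated], change[~treated], equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
```

For the unbalanced/fixed-effects variant described at the end of the answer, the same collapsed data can be stacked into a 2N-row panel and run through any OLS routine with individual dummies and a treatment indicator.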
http://stackoverflow.com/questions/15276219/can-i-calculate-error-introduced-by-doubles
# Can I calculate error introduced by doubles? Suppose I have an irrational number like \sqrt{3}. As it is irrational, it has no decimal representation. So when you try to express it with a IEEE 754 double, you will introduce an error. A decimal representation with a lot of digits is: 1.7320508075688772935274463415058723669428052538103806280558069794519330169088 00037081146186757248575675... Now, when I calculate \sqrt{3}, I get 1.732051: #include <stdio.h> // printf #include <math.h> // needed for sqrt int main() { double myVar = sqrt (3); printf("as double:\t%f\n", myVar); } According to Wolfram|Alpha, I have an error of 1.11100... × 10^-7. Is there any way I can calculate the error myself? (I don't mind switching to C++, Python or Java. I could probably also use Mathematica, if there is no simple alternative) Just to clarify: I don't want a solution that works only for sqrt{3}. I would like to get a function that gives me the error for any number. If that is not possible, I would at least like to know how Wolfram|Alpha gets more values. # My try While writing this question, I found this: #include <stdio.h> // printf #include <math.h> // needed for sqrt #include <float.h> // needed for higher precision int main() { long double r = sqrtl(3.0L); printf("Precision: %d digits; %.*Lg\n",LDBL_DIG,LDBL_DIG,r); } With this one, I can get the error down to 2.0 * 10^-18 according to Wolfram|Alpha. So I thought this might be close enough to get a good estimation of the error. I wrote this: #include <stdio.h> // printf #include <math.h> // needed for sqrt #include <float.h> int main() { double myVar = sqrt (3); long double r = sqrtl(3.0L); long double error = abs(r-myVar) / r; printf("Double:\t\t%f\n", myVar); printf("Precision:\t%d digits; %.*Lg\n",LDBL_DIG,LDBL_DIG,r); printf("Error:\t\t%.*Lg\n", LDBL_DIG, error); } But it outputs: Double: 1.732051 Precision: 18 digits; 1.73205080756887729 Error: 0 How can I fix that to get the error? - So are you asking two questions in one here? How you calculate the error in your answer, and also what's wrong with the second block of code? –  Mike Mar 7 at 16:22 @Mike: Yes, I ask two questions. I have made the second one more precise. So if the second one is answered, the first one is automatically answered. If the first one gets an answer, I don't need an answer for the second one. –  moose Mar 7 at 16:36 What every Programmer should know about Floating Point Arithmetic by Goldberg is the definite guide you are looking for. https://ece.uwaterloo.ca/~dwharder/NumericalAnalysis/02Numerics/Double/paper.pdf - Fixed the last name. –  Alexey Frunze Mar 7 at 22:02 printf rounds doubles to 6 places when you use %f without a precision. e.g. double x = 1.3; long double y = 1.3L; long double err = y - (double) x; printf("Error %.20Lf\n", err); My output: -0.00000000000000004445 If the result is 0, your long double and double are the same. - In line three: Why do you cast x to double? It is a double, so (double) doesn't change anything, does it? –  moose Mar 9 at 8:01 @moose: Just to be explicit about what's happening. –  teppic Mar 9 at 11:26 You have a mistake in printing Double: 1.732051 here printf("Double:\t\t%f\n", myVar); The actual value of double myVar is 1.732050807568877281 //18 digits so 1.732050807568877281-1.732050807568877281 is zero - @AlexeyFrunze It really doesn't matter... but your value is incorrect at 16th digit, mine is at 17th. wolframalpha.com/input/?i=sqrt+3 so... 
:) there are inf digits –  user1944441 Mar 7 at 22:24

Inf in sqrt(3), but not in the floating-point variable. –  Alexey Frunze Mar 7 at 22:39

What? really? :-O That changes everything. –  user1944441 Mar 7 at 22:40

According to the C standard printf("%f", d) will default to 6 digits after the decimal point. This is not the full precision of your double. It might be that double and long double happen to be the same on your architecture. I have different sizes for them on my architecture and get a non-zero error in your example code.

-

You want fabsl instead of abs when calculating the error, at least when using C. (In C, abs is integer.) With this substitution, I get:

Double: 1.732051
Precision: 18 digits; 1.73205080756887729
Error: 5.79643049346087304e-17

(Calculated on Mac OS X 10.8.3 with Apple clang 4.0.)

Using long double to estimate the errors in double is a reasonable approach for a few simple calculations, except:

• If you are calculating the more accurate long double results, why bother with double?
• Error behavior in sequences of calculations is hard to describe and can grow to the point where long double is not providing an accurate estimate of the exact result.
• There exist perverse situations where long double gets less accurate results than double. (Mostly encountered when somebody constructs an example to teach students a lesson, but they exist nonetheless.)

In general, there is no simple and efficient way to calculate the error in a floating-point result in a sequence of calculations. If there were, it would be effectively a means of calculating a more accurate result, and we would use that instead of the floating-point calculations alone. In special cases, such as when developing math library routines, the errors resulting from a particular sequence of code are studied carefully (and the code is redesigned as necessary to have acceptable error behavior). More often, error is estimated either by performing various “experiments” to see how much results fluctuate with varying inputs or by studying general mathematical behavior of systems.

You also asked “I would like to get a function that gives me the error for any number.” Well, that is easy: given any number x and the calculated result x', the error is exactly x' - x. The actual problem is you probably do not have a description of x that can be used to evaluate that expression easily. In your example, x is sqrt(3). Obviously, then, the error is sqrt(3) – x, and x is exactly 1.732050807568877193176604123436845839023590087890625. Now all you need to do is evaluate sqrt(3). In other words, numerically evaluating the error is about as hard as numerically evaluating the original number.

Is there some class of numbers you want to perform this analysis for? Also, do you actually want to calculate the error or just a good bound on the error? The latter is somewhat easier, although it remains hard for sequences of calculations. For all elementary operations, IEEE 754 requires the produced result to be the result that is nearest the mathematically exact result (in the appropriate direction for the rounding mode being used). In round-to-nearest mode, this implies that each result is at most 1/2 ULP (unit of least precision) away from the exact result. For operations such as those found in the standard math library (sine, logarithm, et cetera), most libraries will produce results within a few ULP of the exact result.
- One way to obtain an interval that is guaranteed to contain the real value of the computation is to use interval arithmetic. Then, comparing the double result to the interval tells you how far the double computation is, at worst, from the real computation. Frama-C's value analysis can do this for you with option -all-rounding-modes. double Frama_C_sqrt(double x); double sqrt(double x) { return Frama_C_sqrt(x); } double y; int main(){ y = sqrt(3.0); } Analyzing the program with: frama-c -val t.c -float-normal -all-rounding-modes [value] Values at end of function main: y ∈ [1.7320508075688772 .. 1.7320508075688774] This means that the real value of sqrt(3), and thus the value that would be in variable y if the program computed with real numbers, is within the double bounds [1.7320508075688772 .. 1.7320508075688774]. Frama-C's value analysis does not support the long double type, but if I understand correctly, you were only using long double as reference to estimate the error made with double. The drawback of that method is that long double is itself imprecise. With interval arithmetic as implemented in Frama-C's value analysis, the real value of the computation is guaranteed to be within the displayed bounds. - I remember a math package for C about 25+ years ago that used interval arithmetic. I wonder why that concept seems to have gone by the wayside? –  supercat Sep 3 at 16:53 @supercat In the context of static analysis, the popular and modern thing to do is work on relational domains, where the values of variables are assigned not only ranges but relations. As a general technique for computing approximation errors, it may have fallen out of fashion for the same kind of reason: it give safe but over-approximated bounds for expressions where sub-expressions are related, one useless but typical example being x - x. –  Pascal Cuoq Sep 3 at 17:46 Certainly I can appreciate that there are many cases where a pessimistic range-based evaluation would yield useless results, and I can see that x-x is a particularly nice example [without a means of ensuring that the two x values are equivalent, it expands to the range (min-max)..(max-min)]. Still, in many cases the goal is to know whether a result can be guaranteed to be above or below some value. Knowing that the entire range is above, or the entire range is below, may eliminate the need for more refined analysis. –  supercat Sep 3 at 18:21 BTW, am I the only guy who really misses 80-bit floating-point math? Even if such numbers generally got padded out to 16 bytes, they'd still for many purposes be better than the .NET Decimal type. –  supercat Sep 3 at 18:24 @supercat You might like this series of posts to verify that a IOCCC winning program does not have undefined behavior (posts in order from bottom to top): blog.frama-c.com/index.php?tag/donut And on the subject of 80-bit FP, I too find it extremely convenient and I am glad that history was such that we now have it in addition to standard single- and double-precision, instead of having only these. –  Pascal Cuoq Sep 3 at 18:28
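As a footnote to the thread, the same "compare against a higher-precision reference" idea that the question attempts with long double can be sketched in a few lines of Python using the decimal module (an illustration of the idea only; it is not one of the answers above and not the Frama-C approach):

```python
# Measure how far the double-precision sqrt(3) is from a 50-digit reference value.
from decimal import Decimal, getcontext
import math

getcontext().prec = 50                 # 50 significant digits for the reference
ref = Decimal(3).sqrt()                # high-precision sqrt(3)
dbl = Decimal(math.sqrt(3))            # exact decimal expansion of the double result
abs_err = abs(dbl - ref)
rel_err = abs_err / ref
print(f"absolute error ~ {abs_err:.3E}, relative error ~ {rel_err:.3E}")
```

The caveat raised above still applies: for longer chains of operations, a fixed higher-precision reference only estimates the error, it does not bound it.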
https://atcoder.jp/contests/cf16-final/tasks_print?lang=ja
A - Where's Snuke? 問題文 H 行、横 W 列のマス目があります。 この中から snuke という文字列を探し、列と行の番号を順に続けて出力してください。 制約 • 1≦H, W≦26 • S_{i,j} は小文字アルファベット(a-z)のみからなる長さ 5 の文字列である。 • 与えられる文字列のうち、ちょうど 1 つだけが snuke である。 入力 H W S_{1,1} S_{1,2} ... S_{1,W} S_{2,1} S_{2,2} ... S_{2,W} : S_{H,1} S_{H,2} ... S_{H,W} 出力 snuke という文字列が書かれているマスの列と行の番号を続けて出力せよ。 入力例 1 15 10 snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snuke snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake 出力例 1 H6 入力例 2 1 1 snuke 出力例 2 A1 Score : 100 points Problem Statement There is a grid with H rows and W columns. The square at the i-th row and j-th column contains a string S_{i,j} of length 5. The rows are labeled with the numbers from 1 through H, and the columns are labeled with the uppercase English letters from A through the W-th letter of the alphabet. Exactly one of the squares in the grid contains the string snuke. Find this square and report its location. For example, the square at the 6-th row and 8-th column should be reported as H6. Constraints • 1≦H, W≦26 • The length of S_{i,j} is 5. • S_{i,j} consists of lowercase English letters (a-z). • Exactly one of the given strings is equal to snuke. Input The input is given from Standard Input in the following format: H W S_{1,1} S_{1,2} ... S_{1,W} S_{2,1} S_{2,2} ... S_{2,W} : S_{H,1} S_{H,2} ... S_{H,W} Output Print the labels of the row and the column of the square containing the string snuke, with no space inbetween. 
Sample Input 1 15 10 snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snuke snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake snake Sample Output 1 H6 Sample Input 2 1 1 snuke Sample Output 2 A1 B - Exactly N points 問題文 ある年のCODE FESTIVALの決勝では N 問の問題が出題されました。 i (1≦i≦N) 番目の問題の配点は i 点です。 • 1≦N≦10^7 部分点 • 1≦N≦1000 を満たすデータセットに正解した場合は、200 点が与えられる。 • 追加制約のないデータセットに正解した場合は、上記とは別に 100 点が与えられる。 入力 N 出力 そのような集合が複数考えられる場合は、いずれを出力しても構わない。 入力例 1 4 出力例 1 1 3 4 番目の問題のみを解いた場合もちょうど 4 点が得られますが、1,3 番目の問題を解く方が配点の最大値が小さくなります。 入力例 2 7 出力例 2 1 2 4 \{3,4\} という集合も考えられます。 入力例 3 1 出力例 3 1 Score : 300 points Problem Statement The problem set at CODE FESTIVAL 20XX Finals consists of N problems. The score allocated to the i-th (1≦i≦N) problem is i points. Takahashi, a contestant, is trying to score exactly N points. For that, he is deciding which problems to solve. As problems with higher scores are harder, he wants to minimize the highest score of a problem among the ones solved by him. Determine the set of problems that should be solved. • 1≦N≦10^7 Partial Score • 200 points will be awarded for passing the test set satisfying 1≦N≦1000. • Additional 100 points will be awarded for passing the test set without additional constraints. Input The input is given from Standard Input in the following format: N Output Among the sets of problems with the total score of N, find a set in which the highest score of a problem is minimum, then print the indices of the problems in the set in any order, one per line. If there exists more than one such set, any of them will be accepted. Sample Input 1 4 Sample Output 1 1 3 Solving only the 4-th problem will also result in the total score of 4 points, but solving the 1-st and 3-rd problems will lower the highest score of a solved problem. Sample Input 2 7 Sample Output 2 1 2 4 The set \{3,4\} will also be accepted. Sample Input 3 1 Sample Output 3 1 C - Interpretation 問題文 ある星には M 種類の言語があり、1 \sim M の番号が付けられています。 この星のある年のCODE FESTIVALには星中から N 人の参加者が集まりました。 i (1≦i≦N) 人目の参加者は K_i 種類の言語 L_{i,1}, L_{i,2}, ..., L_{i,{}K_i} を話すことが出来ます。 ある 2 人は以下のいずれかの条件を満たすときに限り、コミュニケーションを取ることが出来ます。 • 2 人ともが話すことの出来る言語が存在する。 • ある人 X が存在して、 2 人ともが X とコミュニケーションを取ることが出来る。 このとき、N 人すべての参加者が他のすべての参加者とコミュニケーションを取ることが出来るかどうかを判定してください。 制約 • 2≦N≦10^5 • 1≦M≦10^5 • 1≦K_i≦M • K_iの総和≦10^5 • 1≦L_{i,j}≦M • L_{i,1}, L_{i,2}, ..., L_{i,{}K_i} は相異なる。 部分点 • N≦1000 かつ M≦1000 かつ K_iの総和≦1000 を満たすデータセットに正解した場合は、200 点が与えられる。 • 追加制約のないデータセットに正解した場合は、上記とは別に 200 点が与えられる。 入力 N M K_1 L_{1,1} L_{1,2} ... L_{1,{}K_1} K_2 L_{2,1} L_{2,2} ... L_{2,{}K_2} : K_N L_{N,1} L_{N,2} ... 
L_{N,{}K_N} 出力 N 人すべての参加者が他のすべての参加者とコミュニケーションを取ることが出来るなら YES を、そうでないなら NO を出力せよ。 入力例 1 4 6 3 1 2 3 2 4 2 2 4 6 1 6 出力例 1 YES • 1 と人 2:共通の言語 2 を話せます。 • 2 と人 3:共通の言語 4 を話せます。 • 1 と人 32 人とも人 2 とコミュニケーションを取ることができます。 • 3 と人 4:共通の言語 6 を話せます。 • 2 と人 42 人とも人 3 とコミュニケーションを取ることができます。 • 1 と人 42 人とも人 2 とコミュニケーションを取ることができます。 また、誰も話すことの出来ない言語が存在する可能性があることに注意してください。 入力例 2 4 4 2 1 2 2 1 2 1 3 2 4 3 出力例 2 NO Score : 400 points Problem Statement On a planet far, far away, M languages are spoken. They are conveniently numbered 1 through M. For CODE FESTIVAL 20XX held on this planet, N participants gathered from all over the planet. The i-th (1≦i≦N) participant can speak K_i languages numbered L_{i,1}, L_{i,2}, ..., L_{i,{}K_i}. Two participants A and B can communicate with each other if and only if one of the following conditions is satisfied: • There exists a language that both A and B can speak. • There exists a participant X that both A and B can communicate with. Determine whether all N participants can communicate with all other participants. Constraints • 2≦N≦10^5 • 1≦M≦10^5 • 1≦K_i≦M • (The sum of all K_i)≦10^5 • 1≦L_{i,j}≦M • L_{i,1}, L_{i,2}, ..., L_{i,{}K_i} are pairwise distinct. Partial Score • 200 points will be awarded for passing the test set satisfying the following: N≦1000, M≦1000 and (The sum of all K_i)≦1000. • Additional 200 points will be awarded for passing the test set without additional constraints. Input The input is given from Standard Input in the following format: N M K_1 L_{1,1} L_{1,2} ... L_{1,{}K_1} K_2 L_{2,1} L_{2,2} ... L_{2,{}K_2} : K_N L_{N,1} L_{N,2} ... L_{N,{}K_N} Output If all N participants can communicate with all other participants, print YES. Otherwise, print NO. Sample Input 1 4 6 3 1 2 3 2 4 2 2 4 6 1 6 Sample Output 1 YES Any two participants can communicate with each other, as follows: • Participants 1 and 2: both can speak language 2. • Participants 2 and 3: both can speak language 4. • Participants 1 and 3: both can communicate with participant 2. • Participants 3 and 4: both can speak language 6. • Participants 2 and 4: both can communicate with participant 3. • Participants 1 and 4: both can communicate with participant 2. Note that there can be languages spoken by no participant. Sample Input 2 4 4 2 1 2 2 1 2 1 3 2 4 3 Sample Output 2 NO For example, participants 1 and 3 cannot communicate with each other. D - Pair Cards 問題文 i 枚目のカードには整数 X_i が書かれています。 • 2 枚のカードに書かれた整数が同じである。 • 2 枚のカードに書かれた整数の和が M の倍数である。 ただし、1 枚のカードを複数の組で使うことはできないものとします。 • 2≦N≦10^5 • 1≦M≦10^5 • 1≦X_i≦10^5 入力 N M X_1 X_2 ... X_N 入力例 1 7 5 3 1 4 1 5 9 2 出力例 1 3 (3,2), (1,4), (1,9)3 組を作ることが出来ます。 (3,2), (1,1) のように組を作ることもできますが、これでは組の個数が最大とならないことに注意してください。 入力例 2 15 10 1 5 6 10 11 11 11 20 21 25 25 26 99 99 99 出力例 2 6 Score : 700 points Problem Statement Takahashi is playing with N cards. The i-th card has an integer X_i on it. Takahashi is trying to create as many pairs of cards as possible satisfying one of the following conditions: • The integers on the two cards are the same. • The sum of the integers on the two cards is a multiple of M. Find the maximum number of pairs that can be created. Note that a card cannot be used in more than one pair. • 2≦N≦10^5 • 1≦M≦10^5 • 1≦X_i≦10^5 Input The input is given from Standard Input in the following format: N M X_1 X_2 ... X_N Output Print the maximum number of pairs that can be created. Sample Input 1 7 5 3 1 4 1 5 9 2 Sample Output 1 3 Three pairs (3,2), (1,4) and (1,9) can be created. 
It is possible to create pairs (3,2) and (1,1), but the number of pairs is not maximized with this. Sample Input 2 15 10 1 5 6 10 11 11 11 20 21 25 25 26 99 99 99 Sample Output 2 6 問題文 りんごさんはクッキーを焼いています。 りんごさんははじめ、1 秒間に 1 枚のクッキーを焼くことができます。 りんごさんはクッキーを食べることができます。 まだ食べていないクッキーが全部で x 枚あるとき、りんごさんはそれらをすべて食べることにより、1 秒間に焼くことのできるクッキーの枚数がちょうど x 枚になります。 クッキーを一部だけ食べることはできず、食べるときはすべて食べなければなりません。 クッキーを食べるためには個数にかかわらず A 秒の時間がかかり、その間はクッキーを焼くことができません。 また、クッキーは 1 秒ごとに同時に焼きあがるため、例えば 0.5 秒で x/2 枚のクッキーを焼くというようなことはできません。 りんごさんは N 枚のクッキーをおばあさんにプレゼントしたいと思っています。 りんごさんがまだ食べていないクッキーを N 枚以上用意するためにかかる時間の最小値を求めてください。 • 1≦N≦10^{12} • 0≦A≦10^{12} • A は整数である。 部分点 • N≦10^6 かつ A≦10^6 を満たすデータセットに正解した場合は、500 点が与えられる。 • 追加制約のないデータセットに正解した場合は、上記とは別に 500 点が与えられる。 入力 N A 出力 りんごさんがまだ食べていないクッキーを N 枚以上用意するためにかかる時間の最小値を出力せよ。 入力例 1 8 1 出力例 1 7 • 1 秒後:1 枚のクッキーが焼きあがる。 • 2 秒後:1 枚のクッキーが焼きあがり、合計枚数が 2 枚となる。ここで、2 枚のクッキーをすべて食べる。 • 3 秒後:クッキーを食べ終わり、1 秒間に 2 枚のクッキーを焼くことができるようになる。 • 4 秒後:2 枚のクッキーが焼きあがる。 • 5 秒後:2 枚のクッキーが焼きあがり、合計枚数が 4 枚となる。 • 6 秒後:2 枚のクッキーが焼きあがり、合計枚数が 6 枚となる。 • 7 秒後:2 枚のクッキーが焼きあがり、合計枚数が 8 枚となる。 入力例 2 1000000000000 1000000000000 出力例 2 1000000000000 入力例 3 123456 7 出力例 3 78 Score : 1000 points Problem Statement Rng is baking cookies. Initially, he can bake one cookie per second. He can also eat the cookies baked by himself. When there are x cookies not yet eaten, he can choose to eat all those cookies. After he finishes eating those cookies, the number of cookies he can bake per second becomes x. Note that a cookie always needs to be baked for 1 second, that is, he cannot bake a cookie in 1/x seconds when x > 1. When he choose to eat the cookies, he must eat all of them; he cannot choose to eat only part of them. It takes him A seconds to eat the cookies regardless of how many, during which no cookies can be baked. He wants to give N cookies to Grandma. Find the shortest time needed to produce at least N cookies not yet eaten. Constraints • 1≦N≦10^{12} • 0≦A≦10^{12} • A is an integer. Partial Score • 500 points will be awarded for passing the test set satisfying N≦10^6 and A≦10^6. • Additional 500 points will be awarded for passing the test set without additional constraints. Input The input is given from Standard Input in the following format: N A Output Print the shortest time needed to produce at least N cookies not yet eaten. Sample Input 1 8 1 Sample Output 1 7 It is possible to produce 8 cookies in 7 seconds, as follows: • After 1 second: 1 cookie is done. • After 2 seconds: 1 more cookie is done, totaling 2. Now, Rng starts eating those 2 cookies. • After 3 seconds: He finishes eating the cookies, and he can now bake 2 cookies per second. • After 4 seconds: 2 cookies are done. • After 5 seconds: 2 more cookies are done, totaling 4. • After 6 seconds: 2 more cookies are done, totaling 6. • After 7 seconds: 2 more cookies are done, totaling 8. Sample Input 2 1000000000000 1000000000000 Sample Output 2 1000000000000 F - Road of the King 問題文 この国の王様である高橋くんは、N 個の町を M 日間かけて廻る出張を計画しています。計画では、町の列 c を決め、i (1≦i≦M) 日目には町 c_i へ行くことにしました。すなわち、i 日目には、今いる町から町 c_i へ移動します。ただし、今いる町が町 c_i であった場合は移動しません。高橋くんははじめ町 1 にいるものとします。 • 2≦N≦300 • 1≦M≦300 入力 N M 入力例 1 3 3 出力例 1 2 入力例 2 150 300 出力例 2 734286322 入力例 3 300 150 出力例 3 0 Score : 1000 points Problem Statement There are N towns in Takahashi Kingdom. They are conveniently numbered 1 through N. Takahashi the king is planning to go on a tour of inspection for M days. He will determine a sequence of towns c, and visit town c_i on the i-th day. 
That is, on the i-th day, he will travel from his current location to town c_i. If he is already at town c_i, he will stay at that town. His location just before the beginning of the tour is town 1, the capital. The tour ends at town c_M, without getting back to the capital. The problem is that there is no paved road in this kingdom. He decided to resolve this issue by paving the road himself while traveling. When he travels from town a to town b, there will be a newly paved one-way road from town a to town b. Since he cares for his people, he wants the following condition to be satisfied after his tour is over: "it is possible to travel from any town to any other town by traversing roads paved by him". How many sequences of towns c satisfy this condition? • 2≦N≦300 • 1≦M≦300 Input The input is given from Standard Input in the following format: N M Output Print the number of sequences of towns satisfying the condition, modulo 1000000007 (=10^9+7). Sample Input 1 3 3 Sample Output 1 2 As shown below, the condition is satisfied only when c = (2,3,1) or c = (3,2,1). Sequences such as c = (2,3,2), c = (2,1,3), c = (1,2,2) do not satisfy the condition. Sample Input 2 150 300 Sample Output 2 734286322 Sample Input 3 300 150 Sample Output 3 0 G - Zigzag MST 問題文 N 個の頂点からなるグラフがあり、頂点には 0~N-1 の番号が付けられています。辺はまだありません。 • A_i 番の頂点と B_i 番の頂点をつなぐ、重み C_i の無向辺を追加する。 • B_i 番の頂点と A_i+1 番の頂点をつなぐ、重み C_i+1 の無向辺を追加する。 • A_i+1 番の頂点と B_i+1 番の頂点をつなぐ、重み C_i+2 の無向辺を追加する。 • B_i+1 番の頂点と A_i+2 番の頂点をつなぐ、重み C_i+3 の無向辺を追加する。 • A_i+2 番の頂点と B_i+2 番の頂点をつなぐ、重み C_i+4 の無向辺を追加する。 • B_i+2 番の頂点と A_i+3 番の頂点をつなぐ、重み C_i+5 の無向辺を追加する。 • A_i+3 番の頂点と B_i+3 番の頂点をつなぐ、重み C_i+6 の無向辺を追加する。 • ... ただし、頂点番号は mod N で考えます。 たとえば、N 番とは 0 番のことであり、2N-1 番とは N-1 番のことです。 すべての辺を追加した後のグラフの最小全域木に含まれる辺の重みの和を求めて下さい。 制約 • 2≦N≦200,000 • 1≦Q≦200,000 • 0≦A_i,B_i≦N-1 • 1≦C_i≦10^9 入力 N Q A_1 B_1 C_1 A_2 B_2 C_2 : A_Q B_Q C_Q 入力例 1 7 1 5 2 1 出力例 1 21 入力例 2 2 1 0 0 1000000000 出力例 2 1000000001 入力例 3 5 3 0 1 10 0 2 10 0 4 10 出力例 3 42 Score : 1300 points Problem Statement We have a graph with N vertices, numbered 0 through N-1. Edges are yet to be added. We will process Q queries to add edges. In the i-th (1≦i≦Q) query, three integers A_i, B_i and C_i will be given, and we will add infinitely many edges to the graph as follows: • The two vertices numbered A_i and B_i will be connected by an edge with a weight of C_i. • The two vertices numbered B_i and A_i+1 will be connected by an edge with a weight of C_i+1. • The two vertices numbered A_i+1 and B_i+1 will be connected by an edge with a weight of C_i+2. • The two vertices numbered B_i+1 and A_i+2 will be connected by an edge with a weight of C_i+3. • The two vertices numbered A_i+2 and B_i+2 will be connected by an edge with a weight of C_i+4. • The two vertices numbered B_i+2 and A_i+3 will be connected by an edge with a weight of C_i+5. • The two vertices numbered A_i+3 and B_i+3 will be connected by an edge with a weight of C_i+6. • ... Here, consider the indices of the vertices modulo N. For example, the vertice numbered N is the one numbered 0, and the vertice numbered 2N-1 is the one numbered N-1. The figure below shows the first seven edges added when N=16, A_i=7, B_i=14, C_i=1: After processing all the queries, find the total weight of the edges contained in a minimum spanning tree of the graph. 
Constraints • 2≦N≦200,000 • 1≦Q≦200,000 • 0≦A_i,B_i≦N-1 • 1≦C_i≦10^9 Input The input is given from Standard Input in the following format: N Q A_1 B_1 C_1 A_2 B_2 C_2 : A_Q B_Q C_Q Output Print the total weight of the edges contained in a minimum spanning tree of the graph. Sample Input 1 7 1 5 2 1 Sample Output 1 21 The figure below shows the minimum spanning tree of the graph: Note that there can be multiple edges connecting the same pair of vertices. Sample Input 2 2 1 0 0 1000000000 Sample Output 2 1000000001 Also note that there can be self-loops. Sample Input 3 5 3 0 1 10 0 2 10 0 4 10 Sample Output 3 42 H - Tokaido 問題文 N 個のマスが一列に並んでおり、左から順に 1~N の番号が付けられています。 すぬけくんとりんごさんはこのマス目を使って、以下のようなボードゲームで遊ぼうとしています。 1. はじめに、すぬけくんがすべてのマスに 1 つずつ整数を書く。 2. 2 人のプレイヤーはそれぞれ 1 つずつ駒を用意し、すぬけくんは自分の駒をマス 1 に、りんごさんは自分の駒をマス 2 に置く。 3. 自分の駒が相手の駒より左にあるプレイヤーが駒を動かす。駒を動かす先は、今自分の駒が置かれているマスよりも右にあってかつ相手の駒が置かれていないマスでなければならない。 4. 3. を繰り返し、これ以上駒を動かすことができなくなるとゲームは終了となる。 5. ゲーム終了時までに自分の駒を置いたことのあるマスに書かれた整数の合計が、それぞれのプレイヤーのスコアとなる。 すぬけくんはすでにマス i (1≦i≦N-1) に整数 A_i を書きましたが、まだマス N には整数を書いていません。 すぬけくんは M 個の整数 X_1,X_2,...,X_M それぞれについて、その数をマス N に書いてゲームを行ったときに「(すぬけくんのスコア)ー(りんごさんのスコア)」がいくらになるのかを計算することにしました。 ただし、それぞれのプレイヤーは「(自分のスコア)ー(相手のスコア)」を最大化するように駒を動かすものとします。 制約 • 3≦N≦200,000 • 0≦A_i≦10^6 • A_i の総和は 10^6 以下である。 • 1≦M≦200,000 • 0≦X_i≦10^9 部分点 • M=1 を満たすデータセットに正解した場合は、700 点が与えられる。 • 追加制約のないデータセットに正解した場合は、上記とは別に 900 点が与えられる。 入力 N A_1 A_2 ... A_{N-1} M X_1 X_2 : X_M 出力 X_1 \dots X_MM 個の整数それぞれに対し、その数をマス N に書いたときの「(すぬけくんのスコア)ー(りんごさんのスコア)」を 1 行にひとつずつ出力せよ。 入力例 1 5 2 7 1 8 1 2 出力例 1 0 ゲームは下図のように進行します。Sはすぬけくんの駒、Rはりんごさんの駒を表しています。 スコアは 2 人とも 10 となり、「(すぬけくんのスコア)ー(りんごさんのスコア)」は 0 となります。 入力例 2 9 2 0 1 6 1 1 2 6 5 2016 1 1 2 6 出力例 2 2001 6 6 7 7 Score : 1600 points Problem Statement There are N squares in a row, numbered 1 through N from left to right. Snuke and Rng are playing a board game using these squares, described below: 1. First, Snuke writes an integer into each square. 2. Each of the two players possesses one piece. Snuke places his piece onto square 1, and Rng places his onto square 2. 3. The player whose piece is to the left of the opponent's, moves his piece. The destination must be a square to the right of the square where the piece is currently placed, and must not be a square where the opponent's piece is placed. 4. Repeat step 3. When the pieces cannot be moved any more, the game ends. 5. The score of each player is calculated as the sum of the integers written in the squares where the player has placed his piece before the end of the game. Snuke has already written an integer A_i into square i (1≦i≦N-1), but not into square N yet. He has decided to calculate for each of M integers X_1,X_2,...,X_M, if he writes it into square N and the game is played, what the value "(Snuke's score) - (Rng's score)" will be. Here, it is assumed that each player moves his piece to maximize the value "(the player's score) - (the opponent's score)". Constraints • 3≦N≦200,000 • 0≦A_i≦10^6 • The sum of all A_i is at most 10^6. • 1≦M≦200,000 • 0≦X_i≦10^9 Partial Scores • 700 points will be awarded for passing the test set satisfying M=1. • Additional 900 points will be awarded for passing the test set without additional constraints. Input The input is given from Standard Input in the following format: N A_1 A_2 ... A_{N-1} M X_1 X_2 : X_M Output For each of the M integers X_1, ..., X_M, print the value "(Snuke's score) - (Rng's score)" if it is written into square N, one per line. 
Sample Input 1 5 2 7 1 8 1 2 Sample Output 1 0 The game proceeds as follows, where S represents Snuke's piece, and R represents Rng's. Both player scores 10, thus "(Snuke's score) - (Rng's score)" is 0. Sample Input 2 9 2 0 1 6 1 1 2 6 5 2016 1 1 2 6 Sample Output 2 2001 6 6 7 7 I - Reverse Grid 問題文 H 行、横 W 列のマス目があり、i 行目の j 列目のマスには文字 S_{i,j} が書かれています。 すぬけくんはこのマス目に対して以下の 2 種類の操作を行うことが出来ます。 • 行リバース:行を 1 つ選び、その行をリバースする。 • 列リバース:列を 1 つ選び、その列をリバースする。 制約 • 1≦H,W≦200 • S_{i,j} は小文字アルファベット(a-z)である。 入力 H W S_{1,1}S_{1,2}...S_{1,W} S_{2,1}S_{2,2}...S_{2,W} : S_{H,1}S_{H,2}...S_{H,W} 入力例 1 2 2 cf cf 出力例 1 6 入力例 2 1 12 codefestival 出力例 2 2 Score : 1900 points Problem Statement Snuke has a grid with H rows and W columns. The square at the i-th row and j-th column contains a character S_{i,j}. He can perform the following two kinds of operation on the grid: • Row-reverse: Reverse the order of the squares in a selected row. • Column-reverse: Reverse the order of the squares in a selected column. For example, reversing the 2-nd row followed by reversing the 4-th column will result as follows: By performing these operations any number of times in any order, how many placements of characters on the grid can be obtained? Constraints • 1≦H,W≦200 • S_{i,j} is a lowercase English letter (a-z). Input The input is given from Standard Input in the following format: H W S_{1,1}S_{1,2}...S_{1,W} S_{2,1}S_{2,2}...S_{2,W} : S_{H,1}S_{H,2}...S_{H,W} Output Print the number of placements of characters on the grid that can be obtained, modulo 1000000007 (=10^9+7). Sample Input 1 2 2 cf cf Sample Output 1 6 The following 6 placements of characters can be obtained: Sample Input 2 1 12 codefestival Sample Output 2 2 J - Neue Spiel 問題文 N 行、横 N 列のマス目が書かれたボードと N \times N 枚のタイルがあります。 • 上辺の差込口:左から順に U1, U2, ..., UN • 下辺の差込口:左から順に D1, D2, ..., DN • 左辺の差込口:上から順に L1, L2, ..., LN • 右辺の差込口:上から順に R1, R2, ..., RN すぬけくんは、N \times N 枚のタイルを 1 枚ずつ差込口から差し込むことによって、各マスに 1 枚ずつタイルが置かれている状態にしようとしています。 ただし、差込口 Ui からはちょうど U_i 枚、差込口 Di からはちょうど D_i 枚、差込口 Li からはちょうど L_i 枚、差込口 Ri からはちょうど R_i 枚のタイルを差し込まなければなりません。 このような差し込み方が可能かどうかを判定してください。また、可能な場合は差し込む順番を出力してください。 制約 • 1≦N≦300 • U_i,D_i,L_i,R_i0 以上の整数である。 • U_i,D_i,L_i,R_i の和は N \times N と等しい。 部分点 • N≦40 を満たすデータセットに正解した場合は、2000 点が与えられる。 • 追加制約のないデータセットに正解した場合は、上記とは別に 100 点が与えられる。 入力 N U_1 U_2 ... U_N D_1 D_2 ... D_N L_1 L_2 ... L_N R_1 R_2 ... R_N 入力例 1 3 0 0 1 1 1 0 3 0 1 0 1 1 出力例 1 L1 L1 L1 L3 D1 R2 U3 R3 D2 入力例 2 2 2 0 2 0 0 0 0 0 出力例 2 NO Score : 2100 points Problem Statement Snuke has a board with an N \times N grid, and N \times N tiles. Each side of a square that is part of the perimeter of the grid is attached with a socket. That is, each side of the grid is attached with N sockets, for the total of 4 \times N sockets. These sockets are labeled as follows: • The sockets on the top side of the grid: U1, U2, ..., UN from left to right • The sockets on the bottom side of the grid: D1, D2, ..., DN from left to right • The sockets on the left side of the grid: L1, L2, ..., LN from top to bottom • The sockets on the right side of the grid: R1, R2, ..., RN from top to bottom Snuke can insert a tile from each socket into the square on which the socket is attached. When the square is already occupied by a tile, the occupying tile will be pushed into the next square, and when the next square is also occupied by another tile, that another occupying tile will be pushed as well, and so forth. Snuke cannot insert a tile if it would result in a tile pushed out of the grid. 
The behavior of tiles when a tile is inserted is demonstrated in detail at Sample Input/Output 1. Snuke is trying to insert the N \times N tiles one by one from the sockets, to reach the state where every square contains a tile. Here, he must insert exactly U_i tiles from socket Ui, D_i tiles from socket Di, L_i tiles from socket Li and R_i tiles from socket Ri. Determine whether it is possible to insert the tiles under the restriction. If it is possible, in what order the tiles should be inserted from the sockets? Constraints • 1≦N≦300 • U_i,D_i,L_i and R_i are non-negative integers. • The sum of all values U_i,D_i,L_i and R_i is equal to N \times N. Partial Scores • 2000 points will be awarded for passing the test set satisfying N≦40. • Additional 100 points will be awarded for passing the test set without additional constraints. Input The input is given from Standard Input in the following format: N U_1 U_2 ... U_N D_1 D_2 ... D_N L_1 L_2 ... L_N R_1 R_2 ... R_N Output If it is possible to insert the tiles so that every square will contain a tile, print the labels of the sockets in the order the tiles should be inserted from them, one per line. If it is impossible, print NO instead. If there exists more than one solution, print any of those. Sample Input 1 3 0 0 1 1 1 0 3 0 1 0 1 1 Sample Output 1 L1 L1 L1 L3 D1 R2 U3 R3 D2 Snuke can insert the tiles as shown in the figure below. An arrow indicates where a tile is inserted from, a circle represents a tile, and a number written in a circle indicates how many tiles are inserted before and including the tile. Sample Input 2 2 2 0 2 0 0 0 0 0 Sample Output 2 NO
2022-07-02 20:24:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4703545570373535, "perplexity": 7588.364228402865}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104204514.62/warc/CC-MAIN-20220702192528-20220702222528-00440.warc.gz"}
https://aviation.stackexchange.com/questions/8365/what-is-the-difference-between-best-rate-of-climb-and-maximum-rate-of-climb
# What is the difference between Best Rate of Climb and Maximum Rate of Climb? Is there a difference between Best Rate of Climb and Maximum Rate of Climb? From my research, best rate of climb trades ground distance for altitude (i.e. steeper climb, more altitude per unit time). Since an aircraft cannot climb faster than its max rate, these appear to be two different labels for the same concept. Am I missing some nuance between the two? Generally when GA pilots talk about climb performance we speak of two different airspeed values: Best Rate of Climb speed (Vy) gets you the most altitude per unit time (feet per minute). When you want to get to cruise altitude quickly for maximum efficiency you'll aim for the best rate of climb so you spend the least time at lower, less efficient altitudes. Best Angle of Climb speed (Vx) gets you the greatest altitude per unit of ground distance (feet per mile). When you've got a FAA-Standard 50-foot-tree at the departure end of the runway you'll aim for the best angle of climb to ensure you don't wind up in the tree. Those speeds are useful to us as pilots, but the exact rate of climb (feet-per-minute) for those speeds will vary: A fully loaded plane will climb more slowly than one that's just got the pilot and a few gallons of fuel on board, and that's where the "maximum rate of climb" enters into the discussion: Maximum rate of climb is the number of feet per minute you can get climbing at the "best rate of climb" airspeed. If someone is being sloppy in their usage "Maximum Rate of Climb" could mean "Best Rate of Climb" (pitch for Vy and you get what you get), but if you're being precise in your usage and really talking about the rate of climb it would mean the theoretical maximum rate of climb in feet per minute based on the current conditions and aircraft weight. Maximum rate of climb under a given set of conditions is useful information to know if you need to clear terrain at some point on your flight path and want to be sure you can climb fast enough to do so: If you're starting from sea level and need to clear a 5000 foot mountain that's 5 minutes away but your plane can't manage more than 500 feet per minute at best-rate-of-climb speed under the current conditions you'll need to reconsider your flight plan to either avoid the mountain or climb in a circle somewhere until you can clear it. • To help remember which is which, I imagine a chart where the x-axis is distance, and the y-axis is altitude. If distance (x) is a concern, then use Vx to minimize the x value for a given y. If reaching a given altitude (y) quickly is your primary concern, then use Vy. I imagine this is the origin of the notation as well. – Steve N Aug 27 '14 at 18:26 • An alternative memory device: the letter x has a lot of angles. – 200_success Aug 27 '14 at 21:24 • I think this answer would be better if best and max rate of climb were contrasted first, because that directly answers the question. Then perhaps describe best angle. – rbp Nov 20 '15 at 15:03 • @rbp I'm not sure how you would contrast them: "Maximum" and "Best" when talking about a rate of climb are, by definition, identical: Gaining the most amount of altitude for the least amount of some other resource (usually Time, Distance, or Fuel). The real question is What rate are you trying to optimize? (Feet-per-Minute, Feet-per-Mile, or Feet-per-Gallon) - optimizing each of these rates will give a different climb profile. 
– voretaq7 Nov 20 '15 at 23:30 Although voretaq7 has already nicely answered it, I wanted to present a picture worth a thousand words. ## VX Best angle of climb speed The greatest gain in altitude over a given horizontal distance. VX is used to clear 50' obstacles and so forth. ## VY Best rate of climb speed The greatest gain in altitude over a given amount of time. VY is used on normal takeoffs and such. • I always liked this image. What's the difference between Vx and Vy? "At Vy you'll crash through the middle of the tower, but at Vx you'll just clip the antenna masts!" – voretaq7 Aug 27 '14 at 19:06 • Shouldn't the aircraft climbing at $V_y$ be higher? – user2168 Aug 31 '14 at 7:06 • user2168, I don't like this picture! Why is the second aircraft at the same height? Where is the "greatest altitude gain"? – Electric Pilot Sep 12 '18 at 14:53 It depends on what is best in the particular case. Generally, best means highest climb speed, but there might be other things which should be optimized: • highest flight path angle: This might be desired to avoid noise on the ground, or to escape unfriendly fire near the airport. For propeller aircraft, the optimum lift coefficient is $$c_L = \frac{T\cdot\pi\cdot AR\cdot \epsilon}{4\cdot m\cdot g} + \sqrt{\left( \frac{T\cdot\pi\cdot AR\cdot \epsilon}{4\cdot m\cdot g}\right)^2 + \pi \cdot AR \cdot \epsilon \cdot c_{D0}}$$ • lowest fuel consumption per altitude gained: The highest climb speed is reached with full power, but maybe a lower power setting is more economical. In general, this is very close to the schedule with maximum climb speed. For propeller aircraft, the optimum lift coefficient for highest climb speed is the same as for minimum energy loss (see the short numerical sketch after this Q&A): $$c_L = \sqrt{3 \cdot \pi \cdot AR \cdot \epsilon \cdot c_{D0}}$$ • highest energy gain (the sum of both altitude and speed gain is optimized): This is desired when you want to reach a point far up in a hurry, like a supersonic interceptor would. The graph below shows lines of equal total energy (black, dashed) and of maximum altitude for given climb speeds over true airspeed (blue, solid). The red line connecting the tops of the blue lines gives the flight schedule for fastest altitude gain, and the green line cutting through the blue lines at their maximum of total energy gives the schedule for the total-energy-gain climb. The higher you fly, the wider the difference between the two optimum speeds. Nomenclature: $c_L \:\:\:$ lift coefficient $T \:\:\:\:$ thrust $m \:\:\:\:$ aircraft mass $g \:\:\:\:\:$ gravity $\pi \:\:\:\:\:$ 3.14159$\dots$ $AR \:\:$ aspect ratio of the wing $\epsilon \:\:\:\:\:$ the wing's Oswald factor $c_{D0} \:$ zero-lift drag coefficient • Peter Kampf: single-handedly proving math is used after high school! – CGCampbell Aug 27 '14 at 20:54 Simple: Best and Maximum (Rate of Climb = ROC) are usually$^1$ used as synonyms. For completeness: There is also the AOC (Angle of Climb), which is just another quantity to describe your flight path (the angle of your climb). In order to reach the maximum of: • ROC you have to maintain the airspeed $V_y$ • AOC you have to maintain the airspeed $V_x$ Here is a nice picture illustrating the two. How to remember the two: x is before y in the alphabet. On take-off (for small planes at least), $V_x$ is used first because you want to climb as steep as possible to be free of any obstacles. Once you are free, you change to $V_y$ in order to climb faster.
1: I say "usually", because "best" may refer to something else, like "best in terms of fuel consumption", "best in terms of flying directly into the tree in front of us", etc., but then the meaning is unambiguous. VY is similar to a transport aircraft V2, which will also get you to 1500 feet above the ground in the least amount of time possible. VY will get you up to a pre-determined altitude, usually between 1000 and 1500 feet AGL. The objective of getting to 1000 to 1500 feet as soon as possible is that at that altitude you have more room to maneuver if an emergency occurs (for example, an engine failure). After obtaining such an altitude, VY should no longer be maintained and the aircraft should be transitioned to a cruise climb, usually about 500 feet a minute, to increase visibility and engine cooling. I hope this helps anyone who was looking for a clear explanation. Good luck and stay safe. Thanks, Joe • A jet aircraft flying at V2 with all engines operating is way too slow. It is more closely related to Vxse. V2 and Vxse are not the same speeds however. – wbeard52 Oct 3 '16 at 3:42
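To put a number on the propeller-aircraft relation quoted in the answer above, here is a minimal numerical sketch of the minimum-energy-loss lift coefficient $c_L = \sqrt{3\,\pi\, AR\, \epsilon\, c_{D0}}$. The aspect ratio, Oswald factor, and zero-lift drag coefficient below are assumed placeholder values in a light-aircraft ballpark, not figures for any particular aircraft.

```python
import math

def best_climb_lift_coefficient(aspect_ratio, oswald_factor, cd0):
    """Optimum lift coefficient for minimum energy loss (propeller aircraft):
    c_L = sqrt(3 * pi * AR * epsilon * c_D0)."""
    return math.sqrt(3.0 * math.pi * aspect_ratio * oswald_factor * cd0)

# Assumed illustrative values (not taken from the answers above):
AR = 7.4       # wing aspect ratio
epsilon = 0.8  # Oswald efficiency factor
cd0 = 0.03     # zero-lift drag coefficient

print(f"optimum c_L for best rate of climb: "
      f"{best_climb_lift_coefficient(AR, epsilon, cd0):.2f}")
```

With these assumed inputs the optimum lift coefficient comes out around 1.3, i.e. the aircraft climbs best well below cruise speed, which is consistent with the qualitative discussion above.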
2020-09-28 09:36:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5101466178894043, "perplexity": 1707.7721528386076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401598891.71/warc/CC-MAIN-20200928073028-20200928103028-00185.warc.gz"}
http://crypto.stackexchange.com/tags/openssl/hot?filter=month
# Tag Info 3 There is no difference. The wiki page you referred to contains examples of hashes for all three versions of Whirlpool. For string "The quick brown fox jumps over the lazy dog", the current version should produce the following hash: B97DE512E91E3828B40D2B0FDCE9CEB3C4A71F9BEA8D88E75C4FA854DF36725F ... 2 You're fine. There are several different padding methods listed in PKCS v1.5. The method that has active attacks is actually a padding used during public key encryption - that is, it's used to encode the plaintext message before handing it off to the RSA public function. We don't use that method to sign messages. For that matter, the attack model used ... 2 No, you do not add the ASN.1 encoding to the hash when generating an ECDSA signature. There are two reasons for this: The first is that there is no room, if we select a curve and a hash with equal security. To be secure against attacks that take $O(2^N)$ time, a curve needs to have a prime that's at least $2N$ bits; to be secure against collision attacks ... 1 A TLS session can be resumed once both sides know of the session. The exchange of the necessary information (i.e. session identifier or session ticket) is done within the initial handshake. This means a session can already be re-used within other connections once this initial handshake is done. How long the session information is kept and if it is ... 1 OK. I think I will attempt to answer this myself. Given that (at least on Linux) perl and openssl have gone down the same path as the rhash author (I am not sure who, in fact, implemented this first), the reason for a different digest is that, due to restricting the input message from $2^{512}$ bits to $2^{64}$ bits max, the first $512$ rows of $4 \times$ ...
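A quick way to check which Whirlpool variant a given tool implements is to hash the test string quoted above and compare the output against the published reference value. The sketch below is only an illustration and assumes a Python build whose hashlib is backed by an OpenSSL that exposes the whirlpool algorithm, which not every build does.

```python
import hashlib

msg = b"The quick brown fox jumps over the lazy dog"

# "whirlpool" is only listed when the underlying OpenSSL build provides it,
# so guard the lookup instead of assuming it is there.
if "whirlpool" in hashlib.algorithms_available:
    digest = hashlib.new("whirlpool", msg).hexdigest().upper()
    print(digest)  # compare against the reference value quoted above
else:
    print("whirlpool is not provided by this OpenSSL build")
```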
2016-05-03 01:25:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43926677107810974, "perplexity": 1143.0507359859898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860118321.95/warc/CC-MAIN-20160428161518-00057-ip-10-239-7-51.ec2.internal.warc.gz"}
https://zbmath.org/?q=an:1163.90473
# zbMATH — the first resource for mathematics Performance analysis and optimization of assemble-to-order systems with random lead times. (English) Zbl 1163.90473 Summary: We study a single-product assembly system in which the final product is assembled to order whereas the components (subassemblies) are built to stock. Customer demand follows a Poisson process, and replenishment lead times for each component are independent and identically distributed random variables. For any given base-stock policy, the exact performance analysis reduces to the evaluation of a set of $M/G/\infty$ queues with a common arrival stream. We show that unlike the standard $M/G/\infty$ queueing system, lead time (service time) variability degrades performance in this assembly system. We also show that it is desirable to keep higher base-stock levels for components with longer mean lead times (and lower unit costs). We derive easy-to-compute performance bounds and use them as surrogates for the performance measures in several optimization problems that seek the best trade-off between inventory and customer service. Greedy-type algorithms are developed to solve the surrogate problems. Numerical examples indicate that these algorithms provide efficient solutions and valuable insights to the optimal inventory/service trade-off in the original problems. ##### MSC: 90B30 Production models 90B22 Queues and service (optimization) Full Text:
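To illustrate the $M/G/\infty$ reduction mentioned in the summary: by Palm's theorem, the number of outstanding replenishment orders for a component facing Poisson demand at rate $\lambda$ with i.i.d. lead times of mean $E[L]$ is Poisson with mean $\lambda E[L]$, so the marginal stockout probability under a base-stock policy is a simple Poisson tail. The sketch below computes only these marginal quantities, ignores the correlation induced by the common arrival stream, and uses assumed illustrative numbers rather than anything from the paper.

```python
import math

def poisson_pmf(k, mean):
    return math.exp(-mean) * mean ** k / math.factorial(k)

def stockout_probability(base_stock, demand_rate, mean_lead_time):
    """P(outstanding orders >= base_stock) for one component, using the
    M/G/infinity result: outstanding orders ~ Poisson(demand_rate * mean_lead_time)."""
    mean = demand_rate * mean_lead_time
    return 1.0 - sum(poisson_pmf(k, mean) for k in range(base_stock))

# Assumed example: demand rate of 2 orders/day, two components with different lead times.
for base_stock, mean_lead in [(6, 2.0), (12, 5.0)]:
    p = stockout_probability(base_stock, 2.0, mean_lead)
    print(f"base stock {base_stock}, mean lead time {mean_lead} days: "
          f"stockout prob = {p:.3f}")
```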
2016-05-05 01:06:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6222202181816101, "perplexity": 6351.116494987964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860125750.3/warc/CC-MAIN-20160428161525-00187-ip-10-239-7-51.ec2.internal.warc.gz"}
https://www.groundai.com/project/measurement-of-the-wwwz-production-cross-section-using-a-matrix-element-technique-in-lepton-jets-events/
Measurement of the WW+WZ Production Cross Section Using a Matrix Element Technique in Lepton + Jets Events # Measurement of the WW+WZ Production Cross Section Using a Matrix Element Technique in Lepton + Jets Events ###### Abstract We present a measurement of the production cross section observed in a final state consisting of an identified electron or muon, two jets, and missing transverse energy. The measurement is carried out in a data sample corresponding to up to 4.6 fb of integrated luminosity at TeV collected by the CDF II detector. Matrix element calculations are used to separate the diboson signal from the large backgrounds. The cross section is measured to be  pb, in agreement with standard model predictions. A fit to the dijet invariant mass spectrum yields a compatible cross section measurement. ###### pacs: 14.80.Bn, 14.70.Fm, 14.70.Hp, 12.15.Ji CDF Collaboration222With visitors from University of Massachusetts Amherst, Amherst, Massachusetts 01003, Istituto Nazionale di Fisica Nucleare, Sezione di Cagliari, 09042 Monserrato (Cagliari), Italy, University of California Irvine, Irvine, CA 92697, University of California Santa Barbara, Santa Barbara, CA 93106 University of California Santa Cruz, Santa Cruz, CA 95064, CERN,CH-1211 Geneva, Switzerland, Cornell University, Ithaca, NY 14853, University of Cyprus, Nicosia CY-1678, Cyprus, University College Dublin, Dublin 4, Ireland, University of Fukui, Fukui City, Fukui Prefecture, Japan 910-0017, Universidad Iberoamericana, Mexico D.F., Mexico, Iowa State University, Ames, IA 50011, University of Iowa, Iowa City, IA 52242, Kinki University, Higashi-Osaka City, Japan 577-8502, Kansas State University, Manhattan, KS 66506, University of Manchester, Manchester M13 9PL, England, Queen Mary, University of London, London, E1 4NS, England, Muons, Inc., Batavia, IL 60510, Nagasaki Institute of Applied Science, Nagasaki, Japan, National Research Nuclear University, Moscow, Russia, University of Notre Dame, Notre Dame, IN 46556, Universidad de Oviedo, E-33007 Oviedo, Spain, Texas Tech University, Lubbock, TX 79609, IFIC(CSIC-Universitat de Valencia), 56071 Valencia, Spain, Universidad Tecnica Federico Santa Maria, 110v Valparaiso, Chile, University of Virginia, Charlottesville, VA 22906, Yarmouk University, Irbid 211-63, Jordan, On leave from J. Stefan Institute, Ljubljana, Slovenia, July 12, 2019 ## I Introduction Measurements of the production cross section of pairs of heavy gauge bosons test the electroweak sector of the standard model (SM). The production cross section can be enhanced by anomalous triple gauge boson interactions hagiwara () or from new particles decaying to pairs of vector bosons. In this paper, we describe the measurement of the production cross section in events containing a high- electron or muon and two hadronic jets. This event topology is expected when one boson in the event decays to an electron or muon and a neutrino, and the other or boson decays to two quarks (). We consider both the and processes as signal because our limited detector resolution of hadronic jets makes the separation of from impracticable. The leading-order and production diagrams are shown in Fig. 1. The predicted SM production cross sections at the Tevatron, calculated at next-to-leading order (NLO), are  pb and  pb VVtheory (). 
Both of these production cross sections have been measured previously at the Tevatron in channels in which both gauge bosons decay leptonically diblepCDF (); diblepD0 (), and no deviation between measurement and prediction has been observed. Hadronic decay modes have higher branching ratios than the leptonic decays, but the corresponding final states are exposed to large backgrounds. The first observation of diboson production at the Tevatron with a hadronic decay was achieved in events with two jets and large missing transverse energy at CDF metjets (). Evidence and observation of the process and decay discussed in this paper were previously reported by the D0 d0lvjj () and CDF ourPRL () collaborations. The observation reported by CDF used a matrix element technique relying on knowledge of the differential cross sections of signal and background processes to separate signal events from the background. This measurement is relevant to the search for the Higgs boson at the Tevatron. One of the most powerful channels used in the search for a Higgs boson with a mass lower than 130 GeV/$c^2$ is the channel in which the Higgs boson is produced in association with a boson, with the Higgs boson decaying to a pair of $b$ quarks and the boson decaying leptonically (). A similar matrix element analysis to the one presented in this paper is employed in the search at CDF WHME (). A well-established measurement of the channel gives us confidence in the similar techniques in the search for the Higgs boson. Similar issues in background modeling and systematic uncertainties are relevant for the two analyses. One important difference, however, is that the search for production uses methods to identify jets originating from b-quarks ("b-tagging"), whereas the analysis presented in this paper does not use b-tagging. This paper presents the details of the matrix element method used in the observation of this process, but applied to a larger data sample corresponding to up to 4.6 fb$^{-1}$ of integrated luminosity taken with the CDF II detector and with some changes in the event selection criteria. In particular, the event selection has been made more inclusive so that it resembles that used in the search more closely. The organization of the rest of this paper is as follows. Section II describes the apparatus used to carry out the measurement, while Section III describes the event selection and backgrounds. The modeling of the signal and background processes is discussed in Section IV. Section V contains the details of the matrix element technique used for the measurement. The systematic uncertainties and results are discussed in Sections VI and VII respectively. A fit to the dijet invariant mass spectrum, performed as a cross check, is presented in Section VIII. Finally, we summarize the conclusions in Section IX. ## II CDF II detector The CDF II detector is a nearly azimuthally and forward-backward symmetric detector designed to study $p\bar{p}$ collisions at the Tevatron. It is described in detail in Ref. CDFdet (). It consists of a charged particle tracking system surrounded by calorimeters and muon chambers. Particle positions and angles are expressed in a cylindrical coordinate system, with the $z$ axis along the proton beam. The polar angle, $\theta$, is measured with respect to the direction of the proton beam, and $\phi$ is the azimuthal angle about the beam axis. The pseudo-rapidity, $\eta$, is defined as $\eta=-\ln[\tan(\theta/2)]$.
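For orientation, here is a short numerical sketch of the coordinate conversion defined above, evaluating the pseudo-rapidity from the polar angle; the angles chosen are arbitrary illustrative values, not detector boundaries quoted in the text.

```python
import math

def pseudorapidity(theta):
    """eta = -ln(tan(theta/2)), with theta the polar angle in radians."""
    return -math.log(math.tan(theta / 2.0))

# A particle at 90 degrees to the beam has eta = 0; smaller polar angles
# (closer to the beam line) give larger |eta|.
for theta_deg in (90.0, 40.0, 10.0):
    theta = math.radians(theta_deg)
    print(f"theta = {theta_deg:5.1f} deg  ->  eta = {pseudorapidity(theta):+.3f}")
```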
The momentum of charged particles is measured by the tracking system, consisting of silicon strip detectors surrounded by an open-cell drift chamber, all immersed in a 1.4 T solenoidal magnetic field coaxial with the Tevatron beams. The silicon tracking system SVX () consists of eight layers of silicon covering the radial region from 1.5 cm to 28 cm from the beam axis. The drift chamber, or central outer tracker (COT) COT (), is composed of eight superlayers that alternate between axial and 2 degree stereo orientations. Each superlayer contains 12 sense wires. The COT covers the radial region from 40 cm to 137 cm and provides good tracking efficiency for charged particles out to . The tracking system is surrounded by calorimeters which measure the energies of electrons, photons, and jets of hadronic particles. The electromagnetic calorimeters use a scintillating tile and lead sampling technology, while the hadronic calorimeters are composed of scintillating tiles with steel absorber. The calorimeters are divided into central and plug sections. The central region, composed of the central electromagnetic (CEM) CEM () and central and end-wall hadronic calorimeters (CHA and WHA) CHAWHA (), covers the region . The end-plug electromagnetic (PEM) PEM () and end-plug hadronic calorimeters (PHA) extend the coverage to . The calorimeters have a component called the shower maximum (ShowerMax) ShowerMax () detector located at the depth in the calorimeter at which the electromagnetic shower is expected to be widest. The ShowerMax uses wire chambers and cathode strips to provide a precise position measurement for electromagnetic clusters. A muon system composed of planar multi-wire drift chambers records hits when charged particles pass through. Four different sections of the muon detector are used for the analysis presented here: the central muon detector (CMU) CMU (), the central muon upgrade (CMP), the central muon extension (CMX), and the barrel muon chambers (BMU). In the central region, , four layers of chambers located just outside of the calorimeter make up the CMU system; the CMU is surrounded by 60 cm of iron shielding and another four layers of chambers compose the CMP system. The CMX covers the region with , while the BMU extends the coverage to . Cherenkov luminosity counters (CLCs) CLC () measure the rate of inelastic collisions, which can be converted to an instantaneous luminosity. The integrated luminosity is calculated from the instantaneous luminosity measurements. The CLCs consist of gaseous Cherenkov counters located at high pseudo-rapidity, 3.64.6. The three-level trigger system at CDF is used to reduce the event rate from 1.7 MHz to about 150 Hz. The first level uses hardware, while the second is a mixture of hardware and fast software algorithms XFT (). The software-based third-level trigger makes use of detailed information on the event, very similar to that available offline. ## Iii Candidate Event Selection and Backgrounds The event selection can be divided into a baseline selection corresponding to the topology of our signal, and a variety of vetoes that are imposed to remove backgrounds. The baseline selection, the relevant backgrounds, and the vetoes are all described in more detail below. A few relevant quantities for the event selection are defined here. The transverse momentum of a charged particle is , where is the momentum of the charged particle track. The analogous quantity measured with calorimeter energies is the transverse energy, . 
The missing transverse energy, , is defined by , where is a unit vector perpendicular to the beam axis and pointing at the calorimeter tower. is corrected for high-energy muons as well as for factors applied to correct hadronic jet energies. We define . Jets are clustered using a cone algorithm, with a fixed cone size in which the center of the jet is defined as () and the size of the jet cone as . ### iii.1 Baseline event selection Figure 2 shows the decay topology that is considered as the signal in this analysis. The final state contains a charged lepton, a neutrino, and two quarks. We focus on events in which the charged lepton is an electron or muon. Events in which the boson decays to a lepton may also be considered part of the signal if a leptonic decay results in an isolated electron or muon. The neutrino passes through the detector without depositing energy; its presence can be inferred in events with . The two quarks will hadronize to form collimated jets of hadrons. As a result, our baseline event selection requires events to contain one high- electron or muon, significant , and two jets. Several triggers are used to collect events for this analysis. Roughly half of the events are selected with a trigger requiring a high- central electron in the CEM ( GeV, ). Two muon triggers, one requiring hits in both the CMP and CMU and the other requiring hits in the CMX, collect events with central muons ( GeV/, ). Finally, a trigger path requiring large and two jets is used to collect events with muons that were not detected by the central muon triggers. The plus jets trigger requires  GeV and two jets with  GeV. The jet and used in the trigger selection are not corrected for detector or physics effects. Further selection criteria are imposed on triggered events offline. Electron (muon) candidates are required to have  GeV ( GeV/). They must fulfill several other identification criteria designed to select pure samples of high- electrons (muons) LepSel (), including an isolation requirement that the energy within a cone of around the lepton axis is less than 10% of the () of the electron (muon). The jet energies are corrected for detector effects jet_details (). We require the highest- jet in the event to have  GeV and the second highest- jet in the event to have  GeV. Finally, we require 20 GeV. Some criteria are imposed specifically on events collected by the plus jets trigger to ensure a high efficiency. We require that the two jets are sufficiently separated, , that one of the jets is central, , and that the transverse energy of both jets is larger than 25 GeV. Even after these cuts, this trigger path is not fully efficient, which is taken into account by a correction curve as a function of . ### iii.2 Backgrounds The baseline selection is based on the signal topology we are trying to select. However, several backgrounds can result in events with a similar topology. • jets: events in which a boson is produced in association with quarks or gluons form a background if the boson decays leptonically. This is the dominant background because of its high production cross section and signal-like properties. • jets: events in which a boson is produced in association with two quarks or gluons may enter our signal sample if the boson decays to electrons or muons and one lepton falls outside the fiducial region of the detector or other mismeasurement leads to significant . 
• QCD non-: events in which several jets are produced, but no real boson is present, may form a background if a jet fakes an electron or muon and mismeasurement of the jet energies results in incorrectly assigning a large to the event. • : top quark pair production is a background because top quarks nearly always decay to a boson and a quark. If a boson decays leptonically, events may pass our baseline event selection criteria. • Single top: leading-order production and decay of single top quarks results in an event topology with a boson and two quarks. ### iii.3 Event vetoes In order to reduce the size of the backgrounds described above, several vetoes are imposed on events in our sample. Events are required to have no additional electrons, muons, or jets, reducing the jets, QCD non-, and backgrounds. A further +jets veto rejects events with a second loosely identified lepton with the opposite charge as the tight lepton if the invariant mass of the tight and loose lepton pair is close to the boson mass: GeV/. events are also effectively removed after this veto. A veto developed specifically to reduce the size of the QCD non- background is imposed. This veto is more stringent for events which contain an electron candidate, since jets fake electrons more often than muons. In electron events, the minimum is raised to 25 GeV, and the transverse mass of the leptonically decaying boson candidate, , is required to be at least 20 GeV/. A variable called the $\not\!\!E_T$ significance is also defined: $$\not\!\!E_T^{\rm sig}=\frac{\not\!\!E_T}{\sqrt{\sum_{\rm jets}C^2_{\rm JES}\cos^2(\Delta\phi_{{\rm jet},\vec{\not E}_T})\,E^{\rm raw}_{T,{\rm jet}}+\cos^2(\Delta\phi_{\vec E_{T,{\rm uncl}},\vec{\not E}_T})\sum E_{T,{\rm uncl}}}},\qquad(1)$$ where $E^{\rm raw}_{T,{\rm jet}}$ is the raw, uncorrected energy of a jet and $C_{\rm JES}$ is the correction to the jet energy jet_details (), $\vec E_{T,{\rm uncl}}$ is the vector sum of the transverse component of calorimeter energy deposits not included in any jet, and $\sum E_{T,{\rm uncl}}$ is the total magnitude of the unclustered calorimeter energies. The $\Delta\phi$ terms measure the distance between the $\not\!\!E_T$ and the jets or unclustered energy; the significance tends to be larger for $\not\!\!E_T$ stemming from a neutrino than for $\not\!\!E_T$ stemming from mismeasurement. We require and in events with an electron candidate. In muon events, the QCD veto simply requires GeV/. We veto events with additional "loose" jets, defined as jets with GeV and . This veto is found to improve the agreement between Monte Carlo and data in the modeling of some kinematic variables. Events consistent with photon conversion and cosmic ray muons are also vetoed stPRD (). ## IV Modeling Both the normalization (number of events in our sample) and the shapes of signal and background processes must be understood to carry out this analysis. ### iv.1 Models used The signal processes and all background processes except the QCD non- background are modeled using events generated by a Monte Carlo program which are run through the CDF II detector simulation CDFsim (). The Monte Carlo event generators used for each process are listed in Table 1. pythia is a leading-order event generator that uses a parton shower to account for initial and final state radiation pythia (). alpgen and madevent are leading-order parton-level event generators alpgen (); madevent (); events generated by alpgen and madevent are passed to pythia where the parton shower is simulated. The top mass is assumed to be GeV/ in the modeling of and single top events. The distributions of the longitudinal momenta of the different types of quarks and gluons within the proton as a function of the momentum transfer of the collision are given by parton distribution functions (PDFs).
The CTEQ5L PDFs are used in generating all Monte Carlo samples in this analysis CTEQ (). Simulation of the QCD non- background is difficult: its production cross section is large and the probability to mimic a boson in the event is small. In addition, the mismeasurements that lead to the QCD non- background having large may not be simulated well. Therefore, this background is modeled using data rather than simulation. Events from jet-based triggers containing a jet that deposits most of its energy in the electromagnetic segment of the calorimeter, as well as events from single lepton triggers that fail lepton requirements but pass a looser set of requirements are used. ### iv.2 Expected event yields The number of events due to the signal and jets, and single top backgrounds that enter our sample are estimated based on their cross section (), the efficiency () with which they are selected, and the integrated luminosity (): . The efficiency , which includes the detector acceptance, is estimated from the Monte Carlo simulation. is taken from NLO calculations for the , , and single top processes and from the CDF inclusive boson production cross section measurement for the +jets background xsections (). As mentioned in the introduction, the and cross sections calculated at NLO are  pb and  pb respectively VVtheory (). The acceptance of these samples measured with respect to the inclusive production cross section is about 2.4% for events and about 1.2% for events. Since neither the production cross section nor the selection efficiency of the QCD non- background is known, we rely on a data-driven technique to estimate its normalization. The shape of the spectrum is very different in events with a real boson than in the events coming from the QCD non- background, as is shown in Fig. 3. The spectrum observed in data is fit with the sum of all contributing processes, where the QCD non- normalization and the jets normalization are free parameters. The fit is performed over GeV, meaning the cut on the described in the event selection above is removed. An example of the fit is shown in Fig. 3 for events with a central electron. The percentage of QCD non events in our signal sample (with the cut imposed) is estimated based on the fit; it is about 5% for events with a central electron, 3% for events with a central muon, and 3% for events in the extended muon category. The +jets normalization is a free parameter in the final likelihood fit to extract the cross section, which is described in Section VC. A preliminary estimate of the +jets normalization used in the modeling validation is derived from the fit described above. Table 2 lists the total expected number of events for signal and background processes. The background normalization uncertainties will be described in Sec. VI. ### iv.3 Background shape validation The kinematics of the background model are validated by comparing the shape of various kinematic quantities in data to the prediction from the models. Each signal and background process is normalized according to Table 2, and the sum of their shapes for a given quantity is compared to that observed in the data. Some examples of the comparisons are shown in Fig. 4 for the , the lepton , the and of both jets, the distance between the two jets (), and the of the two-jet system (). In all of these figures, the integral of the total expectation is set to be equal to the number of data events, so the figures show shape comparisons. 
The hatched band is the uncertainty in the shape of the backgrounds due to the jet energy scale and the scale in alpgen, described further in Sec. VI. The modeling of the kinematic quantities generally matches the data well within the uncertainties. In the case of , the systematic uncertainties do not seem to cover the disagreement between data and Monte Carlo, so an additional mismodeling uncertainty is imposed; this is described further in Sec. VI. The mismodeling uncertainty derived from also affects the modeling of correlated variables, particularly and , covering the observed disagreement between data and expectation. ## V Measurement technique The expected number of events from production is small compared to the expected number of events from +jets production. Moreover, the uncertainty on the number of +jets events expected is large due to uncertainty in the modeling of this process, making it difficult to separate the signal from the +jets background. We employ a matrix element analysis technique to improve the signal and background separation. Matrix element probabilities for various processes are calculated which are then combined to form a single discriminant. ### v.1 Matrix element event probability The matrix element method defines a likelihood for an event to be due to a given production process based on the differential cross section of the process. An outline of the procedure is given here, and full details can be found in Ref. pdongthesis (). The differential cross section for an $n$-body final state with two initial state particles with momenta $q_1$ and $q_2$ and masses $m_1$ and $m_2$ is $$d\sigma=\frac{(2\pi)^4|M|^2}{4\sqrt{(\vec q_1\cdot\vec q_2)^2-m_1^2m_2^2}}\times d\Phi_n,\qquad(2)$$ where $d\Phi_n$ is a phase space factor given by $$d\Phi_n=\delta^4\!\left(q_1+q_2-\sum_{i=1}^n p_i\right)\prod_{i=1}^n\frac{d^3p_i}{(2\pi)^3\,2E_i},\qquad(3)$$ and $E_i$ and $p_i$ are the energies and momenta of the final state particles PDG (). $M$ is the matrix element of the process. We define a probability density for a given process by normalizing the differential cross section to the total cross section: $$P\sim\frac{d\sigma}{\sigma}.\qquad(4)$$ $P$ is not a true probability, as various approximations are used in the calculation of the differential cross section: leading-order matrix elements are used, there are integrations over unmeasured quantities (described below), and several constants are omitted from the calculation. We cannot measure the initial state momenta and the resolution of the final state measurements is limited by detector effects. As a result, we weight the differential cross section with parton distribution functions (PDFs) for the proton and integrate over a transfer function encoding the relationship between the measured quantities $x$ and the parton-level quantities $y$. The probability density is then given by $$P(x)=\frac{1}{\sigma}\int d\sigma(y)\,dq_1\,dq_2\,f(q_1)f(q_2)W(y,x),\qquad(5)$$ where $f(q_1)$ and $f(q_2)$ are the PDFs in terms of the fraction of the proton momentum, and $W(y,x)$ is the transfer function. The PDFs are evaluated based on the CTEQ6.1 parameterization CTEQ (). Using Eqs. 2 and 3 and neglecting the masses and transverse momenta of the initial partons, the event probability is given by $$P(x)=\frac{1}{\sigma}\int 2\pi^4|M|^2\,\frac{f(y_1)}{|E_{q_1}|}\,\frac{f(y_2)}{|E_{q_2}|}\,W(y,x)\,d\Phi_4\,dE_{q_1}\,dE_{q_2}.\qquad(6)$$ The squared matrix element, $|M|^2$, is calculated at tree level using the helas package helas (), with the diagrams for a given process provided by madgraph madevent (). In $W(y,x)$, the lepton energy and angle, as well as the jet angles, are assumed to be measured exactly. The jet energy transfer function is derived by comparing parton energies to the fully simulated jet response in Monte Carlo events.
A double Gaussian parameterization of the difference between the jet and parton energy is used. Three different transfer functions are derived: one for jets originating from quarks, one for jets originating from other non- quarks, and one for jets originating from gluons. The appropriate transfer function is chosen based on the diagram in the matrix element being evaluated. The measured missing transverse energy is not used in the calculation of the event probability; conservation of momentum is used to determine the momentum of the neutrino. After conservation of energy and momentum have been imposed, the integral to derive the event probability is three-dimensional: the energies of the quarks and the longitudinal momentum of the neutrino are integrated over. The integration is carried out numerically using an adaptation of the CERNLIB radmul routine radmul () or the faster divonne integration algorithm implemented in the cuba library cuba (). The results of the two integrators were checked against each other and found to be compatible. ### v.2 Event Probability Discriminant The matrix element event probability is calculated for the signal and processes, as well as for single top production and several contributions to the +jets background: , , , , and , where , , , and are gluons, light flavor quarks, bottom quarks, and charm quarks respectively. No matrix element calculation is carried out for the , +jets, and QCD non- background processes. All of these backgrounds require some additional assumptions, making the matrix element calculation more difficult and computationally intensive. For example, events become a background if several jets or a lepton are not detected; incorporating this in the matrix element calculation requires additional integrations which are computationally cumbersome. For the +jets background process, a lepton either fakes a jet or escapes detection, two scenarios difficult to describe in the matrix element calculation. Finally, the QCD non- background would require a large number of leading-order diagrams as well as a description of quarks or gluons faking leptons. The +jets and QCD backgrounds look very different from the signal (i.e. there will be no resonance in the dijet mass spectrum) so we expect good discrimination even without including probabilities explicitly for those background processes. The probabilities for individual processes described above (, where runs over the processes) are combined to form a discriminant, a quantity with a different shape for background-like events than for signal-like events. We define the discriminant to be of the form so that background-like events will have values close to zero and signal-like events will have values close to unity. The and are just the sum of individual probabilities for signal and background processes, but we put in some additional factors to form the event probability discriminant, or . First, as noted above, various constants are omitted from the calculations of . We normalize the relative to each other by calculating them for each event in large Monte Carlo samples. We then find the maximal over all Monte Carlo events corresponding to a given process, . The normalized probabilities are then given by . In addition, we multiply each by a coefficient, . This coefficient has the effect of weighting some probabilities more than others in the discriminant. These are optimized to achieve the best expected sensitivity based on the models. 
The full $EPD$ is then given by: $$EPD=\frac{\sum_{i=1}^{n_{\rm sig}}C_i\,\frac{P_i}{P_i^{\rm max}}}{\sum_{i=1}^{n_{\rm sig}}C_i\,\frac{P_i}{P_i^{\rm max}}+\sum_{j=1}^{n_{\rm BG}}C_j\,\frac{P_j}{P_j^{\rm max}}},\qquad(7)$$ where the summation over signal processes runs over $WW$ and $WZ$ () and the summation over background processes runs over the $W$+jets processes listed above and the single top diagrams (). Figure 5 shows the $EPD$ templates for signal and background processes normalized to unit area. The background processes all have similar shapes while the signal process falls more slowly. We validate the modeling of the $EPD$ for background events by comparing data and simulation in the region with GeV/ and GeV/, where we expect very little signal. The result of the comparison is shown in Fig. 6. The agreement between data and simulation is very good. The effectiveness of the $EPD$ in isolating signal-like events can be seen by plotting the invariant mass of the two jets in $EPD$ bins, shown in Fig. 7. This quantity is expected to have a resonance around the $W$ or $Z$ boson mass for signal-like events. The bin with low $EPD$ values (0–0.25), in the top left plot, has events in the full dijet mass range from 20 to 200 GeV/$c^2$. For the higher bins, however, the distribution is peaked around the boson mass. As the $EPD$ range approaches unity, the expected signal to background ratio increases and the dijet mass peak becomes narrower. ### v.3 Likelihood fit The shape of the $EPD$ observed in data is fit to a sum of the templates shown in Fig. 5 to extract the signal cross section. The events are divided into three channels corresponding to different lepton categories: one channel for central electrons, another for central muons, and a third for events with muons collected by the $\not\!\!E_T$ plus jets trigger. A maximum likelihood fitting procedure is used. The likelihood is defined as the product of Poisson probabilities over all bins of the template over all channels: $$L=\prod_{i=1}^{n_{\rm bins}}\frac{\mu_i^{n_i}}{n_i!}e^{-\mu_i},\qquad(8)$$ where $n_i$ and $\mu_i$ are the observed and predicted number of events in bin $i$ respectively. The prediction in a bin is the sum over signal and background predictions: $$\mu_i=\sum_{k=1}^{n_{\rm sig}}s_{ik}+\sum_{k=1}^{n_{\rm bg}}b_{ik},\qquad(9)$$ with $b_{ik}$ the predicted contribution from background $k$ in bin $i$. $n_{\rm sig}$ is two, corresponding to the $WW$ and $WZ$ processes; $n_{\rm bg}$ is the number of background processes. The predicted number of events in a bin is affected by systematic uncertainties. The sources of systematic uncertainty are described in detail in Section VI. For each source of uncertainty, a nuisance parameter is introduced whose value changes the predicted contribution of a process to a bin. Each nuisance parameter has a Gaussian probability density function (p.d.f.) with a mean of zero and a width given by the 1$\sigma$ uncertainty. A detailed mathematical description of the way the nuisance parameters are incorporated in the likelihood is given in Ref. stPRD (). Finally, with a likelihood that is a function of the observed data, the signal cross section, the predicted signal and background contributions, and systematic uncertainties and their corresponding nuisance parameters, we extract the cross section. A Bayesian marginalization technique integrates over the nuisance parameters, resulting in a posterior probability density which is a function of the signal cross section. The measured cross section corresponds to the maximum point of the posterior probability density, and the 68% confidence interval is the shortest interval containing 68% of the area of the posterior probability density. The measured cross section is the total cross section of the signal, . Assuming the ratio between the $WW$ and $WZ$ cross sections follows the NLO prediction, .
If the ratio between the cross sections is different than the NLO prediction, we are measuring the total cross section . Here and are not assumed to follow NLO predictions. The ratio between the and acceptances is predicted from the signal simulations described in Sec. IVA. ## Vi Systematic Uncertainties Systematic uncertainties affect the normalization of background processes, the signal acceptance, and the shape of the for both background and signal processes. The sources of systematic uncertainty and the aspects of the measurement affected by each are briefly described in this section. Finally, the expected contribution of the uncertainties to the cross section measurement are explored. ### vi.1 Sources of uncertainty • Normalization of background processes: The uncertainties in the normalization of the background processes are summarized in Table 3. The uncertainty on the +jets normalization is taken to be an arbitrarily large number; the fit to extract the cross section constrains the +jets normalization to a few percent, so taking a 20% uncertainty is equivalent to allowing the +jets normalization to float. The uncertainty on the jets, , and single top backgrounds are derived from the uncertainty in their cross sections and uncertainties on the efficiency estimate. The 40% uncertainty on the QCD non- contribution is a conservative estimate based on differences observed between different choice of sample models. • Jet Energy Scale (JES): As mentioned above, jet energies are corrected for detector effects. The corrections have systematic uncertainties associated with them jet_details (). The size of the 1 uncertainty depends on the of the jet, ranging from about 3% for jet 80 GeV to about 7% for jet 20 GeV. The effect of the JES uncertainty on the measurement is estimated by creating two shifted Monte Carlo samples: one in which the energy of each jet in each event of our Monte Carlo samples is shifted by and the second in which each jet energy is shifted by , taking the -dependence of the uncertainty into account. The whole analysis is repeated with the shifted Monte Carlo samples, including the calculation of the matrix elements. The JES uncertainty has a small effect on the estimated signal acceptance because the efficiency of the jet selection depends on the JES. The size of the acceptance uncertainty is about 1%. In addition, the shape of the templates for the signal processes and for the dominant +jets background process are affected by the JES uncertainty. The change in the background shape is relatively small compared to the change in the signal shape. The signal normalization uncertainty, the signal shape uncertainty, and the background shape uncertainty are incorporated as a correlated uncertainty in the likelihood fit. • scale in alpgen: The factorization and renormalization scale, or scale, is a parameter in the perturbative expansion used to calculate matrix elements in alpgen. Higher-order calculations become less dependent on the choice of scale, but alpgen is a leading-order generator and its modeling is affected by the choice of scale. The scale used in generating our central +jets samples is , where is the mass of the boson, is the transverse mass, and the summation is over all final-state partons. alpgen +jets samples were generated with this central scale doubled and divided by two. These are taken as uncertainties on the shape of the +jets template. 
• Integrated luminosity: The integrated luminosity is calculated based on the inelastic cross section and the acceptance of CDF’s luminosity monitor Lumi (). There is a 6% uncertainty on the calculation, which is included as a correlated uncertainty on the normalization of all processes except the non- QCD background and the +jets background, whose normalizations are determined from fits to the data. • Initial and final state radiation: Comparison between samples simulated with pythia and Drell-Yan data, where no FSR is expected, are used to determine reasonable uncertainties for the parameters used to tune the initial and final state radiation in pythia MtopTemplate (). The signal and samples were generated with the level of ISR and FSR increased and decreased, and the change in the acceptance was estimated. This results in an uncertainty of about 5% on the signal acceptance. • PDFs: The PDFs used in generating the Monte Carlo samples have some uncertainty associated with them. The uncertainty on the signal acceptance is estimated in the same way as in Ref. MtopTemplate (). The uncertainty in the signal acceptance is found to be 2.5%. • Jet Energy Resolution (JER): A comparison between data and simulation is used to assign an uncertainty on the jet energy resolution TopWidth (). For a jet with measured of 40 GeV, the jet energy resolution is )%. The matrix element calculations are repeated for the signal Monte Carlo sample with a higher jet energy resolution, and no change in the shape of the is observed. A small () uncertainty on the signal acceptance is assigned. • +jets modeling: In addition to the shape uncertainties on the jets due to the JES and scale, we impose shape uncertainties due to mismodeling of the of the dijet system () and the of the lower- jet in the event (). We derive the uncertainty due to the mismodeling of these variables by reweighting the jets Monte Carlo model to agree with data as a function of either or . When deriving the weights, we remove events with GeV/ (the region in which we expect most of the signal) to avoid biasing the measurement towards the expected result. The mismodeling of has a negligible effect on the shape of the , whereas the mismodeling of has a small effect on its shape. • Lepton identification efficiency: There is a 2% uncertainty on the efficiency with which we can identify and trigger on leptons. This uncertainty is assigned in the same way as the uncertainty on the integrated luminosity. ### vi.2 Effect on cross section fit Pseudoexperiments are carried out to determine the expected uncertainty on the cross section. The pseudoexperiments are generated by varying the bin contents of each template histogram according to Poisson distributions as well as randomly setting a value for each nuisance parameter according to its p.d.f. The likelihood fit is applied to each pseudoexperiment to extract the cross section. In order to estimate the effect of certain systematic uncertainties, they are taken out of the pseudo-experiments one-by-one. The expected statistical uncertainty (including the uncertainty on the background normalizations) was found to be 14% while the total systematic uncertainty is expected to be 16%. The total (systematic plus statistical) uncertainty expected on the cross section is 21%. The largest predicted systematic uncertainties are the JES, scale, and luminosity uncertainties, which contribute 8%, 7%, and 6% respectively to the total uncertainty. 
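To make the pseudoexperiment procedure concrete, here is a minimal sketch (Python; the template contents, uncertainty sizes, and the simple common scale factor are illustrative assumptions of mine, not the analysis code, which uses the full likelihood of Eq. 8 with Bayesian marginalization).

```python
# Hedged sketch of a single pseudoexperiment: draw one value per nuisance
# parameter from a unit Gaussian, scale the expected templates accordingly,
# then Poisson-fluctuate each bin to form pseudo-data for the likelihood fit.
# All templates and fractional uncertainties below are invented.
import numpy as np

rng = np.random.default_rng(1)

signal = np.array([1.0, 2.0, 5.0, 12.0])      # expected signal per discriminant bin
wjets = np.array([80.0, 40.0, 15.0, 5.0])     # dominant background template
nuisance_frac = {"jes": 0.08, "lumi": 0.06}   # fractional effect of each nuisance source

def pseudoexperiment(signal, background, nuisance_frac):
    # One Gaussian draw per nuisance parameter, applied here as a common scale
    # factor (a real fit would let each source reshape or rescale specific templates).
    scale = 1.0
    for frac in nuisance_frac.values():
        scale *= 1.0 + frac * rng.standard_normal()
    expected = scale * (signal + background)
    return rng.poisson(np.clip(expected, 0.0, None))  # Poisson-fluctuated pseudo-data

print(pseudoexperiment(signal, wjets, nuisance_frac))
```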
Based on the pseudoexperiments, we can also understand which nuisance parameters are constrained in the likelihood fit. The +jets normalization uncertainty, which has a width of 20% in the prior p.d.f., is constrained on average to 1.8% in the pseudoexperiments. The first few bins of the , which are dominated by the +jets contribution, establish this constraint, and the effect of the constraint is to reduce the uncertainty in the +jets normalization in the high- bins, which are most important to the signal extraction. ## Vii Results The likelihood fit is carried out in a data sample corresponding to an integrated luminosity of 4.6 fb. The shape of the observed in data is shown superimposed on the shape expected from Monte Carlo in Fig. 8. The cross section for production is found to be  pb. This result agrees with the prediction from NLO calculations of  pb. The cross section was extracted in each lepton channel separately as a cross-check. The results are listed in Table 4. The extracted cross section agrees across lepton channels. ## Viii Fit to the dijet invariant mass A similar template fit to the one described above was carried out using the invariant mass of the two jets rather than the with exactly the same event selection and sources of systematic uncertainty. The distribution of in data is shown superimposed on the stacked predictions in Fig. 9. The templates for the fit are shown in Fig. 10. There is a resonance for the signal since the two jets are a product of or boson decay, while the backgrounds have very different shapes without an apparent resonance. The shape of the jets background is a falling distribution shaped by event selection cuts. The expected uncertainty on the cross section extracted by a fit to is about 19%, lower than the expected uncertainty when fitting the . While the statistical uncertainty is larger when fitting than when fitting the , the systematic uncertainty is smaller. The dominant systematic uncertainty is expected to be the shape uncertainty on the +jets background due to the mismodeling of , while the JES and scale uncertainties are less important than when fitting the . The cross section extracted from the fit to is  pb. Based on pseudo-experiments, the expected correlation between the fit to and the fit to is about 60%. Thus the cross sections extracted from the and the fits have a discrepancy of about 1.8. Fitting the dijet mass is presented here as a cross-check to the result from the matrix element technique because it is a less sensitive way of extracting the signal. In other words, the expected probability that the signal can be faked by the background is higher when fitting the dijet mass than when fitting the . As a result, the first observation of the signal in this channel was provided by the matrix element technique ourPRL (). With the data sample presented in this paper, the expected sensitivity of the matrix element technique is 5.0, while it is 4.6 when fitting . The observed significances are 5.4 and 3.5 for the matrix element and analyses respectively. ## Ix Conclusions We have extracted the cross section for production in the final state with a lepton, two jets, and missing transverse energy using a matrix element technique. The cross section is measured to be  pb, in agreement with the NLO theoretical prediction of  pb. The measurement is primarily systematically limited; the jet energy scale and scale uncertainties give both large contributions to the total uncertainty. 
Improvements to the cross section measurement could be achieved by reducing the size of the systematic uncertainties via data-driven methods. The effect of systematic uncertainties on the measurement could also be reduced by further optimization of the event selection and discriminant. ###### Acknowledgements. We thank the Fermilab staff and the technical staffs of the participating institutions for their vital contributions. This work was supported by the U.S. Department of Energy and National Science Foundation; the Italian Istituto Nazionale di Fisica Nucleare; the Ministry of Education, Culture, Sports, Science and Technology of Japan; the Natural Sciences and Engineering Research Council of Canada; the Humboldt Foundation, the National Science Council of the Republic of China; the Swiss National Science Foundation; the A.P. Sloan Foundation; the Bundesministerium für Bildung und Forschung, Germany; the Korean Science and Engineering Foundation and the Korean Research Foundation; the Science and Technology Facilities Council and the Royal Society, UK; the Institut National de Physique Nucleaire et Physique des Particules/CNRS; the Russian Foundation for Basic Research; the Ministerio de Ciencia e Innovación, and Programa Consolider-Ingenio 2010, Spain; the Slovak R&D Agency; and the Academy of Finland. ## References • (1) K. Hagiwara, S. Ishihara, R. Szalapski, and D. Zeppenfeld, Phys. Rev. D 48, 2182 (1993). • (2) J. M. Campbell and R. K. Ellis, Phys. Rev. D 60, 113006 (1999). • (3) T. Aaltonen et al. (CDF Collaboration), Phys. Rev. Lett. 104, 201801 (2010); A. Abulencia et al. (CDF Collaboration), ibid 98, 161801 (2007); • (4) V. Abazov et al. (D0 Collaboration), Phys. Rev. Lett. 103, 191801 (2009) and Phys. Rev. D 76, 111104(R) (2007). • (5) T. Aaltonen et al. (CDF Collaboration), Phys. Rev. Lett. 103, 091803 (2009). • (6) V. M. Abazov et al. (D0 Collaboration), Phys. Rev. Lett. 102, 161801 (2009). • (7) T. Aaltonen et al. (CDF Collaboration), Phys. Rev. Lett. bf 104, 101801 (2010). • (8) T. Aaltonen et al. (CDF Collaboration), Phys. Rev. Lett. 103, 101802 (2009). • (9) D. Acosta et al. (CDF Collaboration), Phys. Rev. D 71, 032001 (2005). • (10) A. Sill et al., Nucl. Instrum. Methods A 447, 1 (2000). • (11) T. Affolder et al., Nucl. Instrum. Methods A 526, 249 (2004). • (12) L. Balka et al., Nucl. Instrum. Methods A 267, 272 (1988). • (13) S. Bertolucci et al., Nucl. Instrum. Methods A 267, 301 (1988). • (14) M. Albrow et al., Nucl. Instrum. Methods A 480, 524 (2002). • (15) G. Apollinari et al., Nucl. Instrum. Methods A 412, 515 (1998). • (16) G. Ascoli et al., Nucl. Instrum. Methods A 268, 33 (1988). • (17) D. Acosta et al., Nucl. Instrum. Methods A 494, 57 (2002). • (18) E.J. Thomson et al., IEEE Trans. on Nucl. Science. 49, 1063 (2002). • (19) A. Abulencia et al., J. Phys.G 34, 2457 (2007). • (20) A. Bhatti et al., Nucl. Instrum. Methods A 566, 375 (2006). • (21) T. Aaltonen et al. (CDF Collaboration), arXiv:hep-ex/1004.1181 • (22) E. Gerchtein and M. Paulini, CHEP03 Conference Proceedings, 2003. • (23) T. Sjöstrand et al., Comput. Phys. Commun., 135, 238 (2001). • (24) M. L. Mangano et al., J. High Energy Phys. 07 (2003) 001. • (25) J. Alwall et al. J. High Energy Phys. 09 (2007) 028. • (26) J. Pumplin et al. J. High Energy Phys. 07 (2002) 012. • (27) D. Acosta et al. (CDF collaboration), Phys. Rev. Lett. 94, 091803 (2005); M. Cacciari et al., J. High Energy Phys. 09 (2008) 127; B. W. Harris et al., Phys. Rev. D 66, 054024 (2002). • (28) P. J. Dong, Ph.D. 
Thesis, University of California at Los Angeles, 2008, FERMILAB-THESIS-2008-12. • (29) C. Amsler et al., Phys. Lett. B 667, 1 (2008). • (30) I. Murayama, H. Watanabe and K. Hagiwara, Tech. Rep. 91-11, KEK (1992). • (31) A. Genz and A. Malik, J. Comput. Appl. Math. 6, 295 (1980); implemented as CERNLIB algorithm D120, documented at http://wwwasdoc.web.cern.ch/wwwasdoc/shortwrupsdir/d120/top.html. • (32) T. Hahn, Comput. Phys. Commun. 168, 78 (2005). • (33) D. Acosta et al., Nucl. Instrum. Methods A 494, 57 (2002). • (34) A. Abulencia et al. (CDF Collaboration), Phys. Rev. D 73, 032003 (2006). • (35) T. Aaltonen et al. (CDF Collaboration), Phys. Rev. Lett. 102, 042001 (2009).
2019-11-20 01:20:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7957096099853516, "perplexity": 1133.6049960018993}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670389.25/warc/CC-MAIN-20191120010059-20191120034059-00477.warc.gz"}
https://physics.stackexchange.com/questions/184367/laplace-beltrami-vs-dalembert-operators-in-flat-vs-curved-space-time
# Laplace-Beltrami vs d'Alembert operators in flat vs curved space-time I am confused about the difference between the Laplace-Beltrami (LB) and d'Alembert operators in flat/curved space-time. The d'Alembert operator in flat space-time (Minkowski) is defined as $$\Box= \partial^\mu \partial_\mu = g^{\mu\nu} \partial_\nu \partial_\mu = \frac{1}{c^{2}} \frac{\partial^2}{\partial t^2} - \frac{\partial^2}{\partial x^2} - \frac{\partial^2}{\partial y^2} - \frac{\partial^2}{\partial z^2}$$ i.e. it is the time derivative term minus the standard Euclidean Laplace operator ($\Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}$). However, since the scalar wave equation in curved space-time can be written using the d'Alembert operator $$\Box\phi \equiv \frac{1}{\sqrt{-g}} \partial_{\mu} \left(\sqrt{-g}g^{\mu\nu}\partial_{\nu}\phi \right),$$ and the Laplace-Beltrami operator is defined as $$\nabla^2 f = \frac{1}{\sqrt {|g|}} \partial_i \left(\sqrt{|g|} g^{ij} \partial_j f \right)$$ is the LB operator then just the d'Alembert operator in "3D"?

Mathematically speaking they are the same operator. Usually we reserve the d'Alembertian for 3+1 dimensional spacetime (so in the absence of curvature it takes the form $\partial_0^2 - \nabla^2$), while the Laplace-Beltrami operator is defined for an arbitrary-dimensional manifold with arbitrary signature. The only possible difference is that sometimes (not always, though), $\Box$ is defined as $\partial_0^2 - \partial_1^2 - \partial_2^2 - \partial_3^2$ independently of signature, so if your metric is $(-+++)$ then you will have $\Box = -\nabla^2$, where $\nabla^2$ here means the LB operator.
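To make the "same operator" point concrete, here is a small sympy check (my own addition, not part of the original exchange) that applies the Laplace-Beltrami formula to the Minkowski metric with signature $(+,-,-,-)$ and recovers the flat-space d'Alembertian (with $c=1$ for brevity).

```python
# A minimal sketch: build the Laplace-Beltrami operator for a diagonal metric and
# check that, for diag(1, -1, -1, -1), it reduces to the flat d'Alembertian.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)
phi = sp.Function('phi')(*coords)

def laplace_beltrami(g_diag, f, coords):
    """Apply (1/sqrt|g|) d_i ( sqrt|g| g^{ij} d_j f ) for a diagonal metric g_diag."""
    g = sp.diag(*g_diag)
    g_inv = g.inv()
    sqrt_abs_g = sp.sqrt(sp.Abs(g.det()))
    terms = [sp.diff(sqrt_abs_g * g_inv[i, i] * sp.diff(f, coords[i]), coords[i])
             for i in range(len(coords))]
    return sp.simplify(sum(terms) / sqrt_abs_g)

box_phi = laplace_beltrami([1, -1, -1, -1], phi, coords)
expected = sp.diff(phi, t, 2) - sp.diff(phi, x, 2) - sp.diff(phi, y, 2) - sp.diff(phi, z, 2)
print(sp.simplify(box_phi - expected))  # 0 -> identical operators in flat spacetime
```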
2020-11-26 21:37:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9777408838272095, "perplexity": 538.5514382700323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141188947.19/warc/CC-MAIN-20201126200910-20201126230910-00249.warc.gz"}
http://math.stackexchange.com/questions/92755/non-linear-ode-y2-y-xy/92855
# Non-linear ODE: $(y')^2 + y = xy'$ I'm sure it's staring at me, but how does one solve this? $$(y')^2 + y = xy'$$ Thanks. - That's Clairut's equation. Wolfram alpha gives a clean solution step by step in this case, just click the 'show steps' button. –  H. M. Šiljak Dec 19 '11 at 16:34 That should have been Clairaut, actually. –  H. M. Šiljak Dec 19 '11 at 16:35 One solution is $y = x-1$? –  AD. Dec 19 '11 at 16:44 @H.M.Šiljak: I think you meant this one. –  Gigili Dec 19 '11 at 16:49 Ok, got it. Differentiating both sides wrt x 2y'y'' + y' = y' + xy'' i.e., y''(2y'-x) = 0 which can now just be solved for each case in turn, y'' = 0 and 2y' = x Thanks everyone. –  Simon S Dec 19 '11 at 16:50 $$2y^\prime y^{\prime\prime}+y^\prime =y^\prime+xy^{\prime\prime}$$ i.e., $y^{\prime\prime}(2y^\prime −x)=0$ which can now just be solved for each case in turn, $y^{\prime\prime}=0$ and $2y^\prime=x$ Note that not every solution to $y'' = 0$ and $2y' = x$ automatically works because the differentiation step is not reversible. But it's easy to plug the solutions to those into the original equation and find out which solutions do work. –  Zarrax Dec 20 '11 at 1:54
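As a quick sanity check of the thread's conclusion (my own snippet, using sympy), one can substitute the candidate solutions back into $(y')^2 + y = xy'$: the family $y = Cx - C^2$ (which includes $y = x - 1$) and the singular solution $y = x^2/4$ obtained from $2y' = x$ all satisfy it.

```python
# Verify candidate solutions of the Clairaut equation (y')^2 + y = x y' by
# substituting them into (y')^2 + y - x*y' and simplifying; 0 means "solution".
import sympy as sp

x, C = sp.symbols('x C')

def check(expr):
    return sp.simplify(sp.diff(expr, x)**2 + expr - x * sp.diff(expr, x))

print(check(C*x - C**2))   # 0 -> general (line) solutions y = C x - C^2
print(check(x - 1))        # 0 -> the particular solution noted in the thread (C = 1)
print(check(x**2 / 4))     # 0 -> the singular solution coming from 2y' = x
```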
2015-06-30 08:40:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9343200922012329, "perplexity": 1120.4924728716903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375091925.14/warc/CC-MAIN-20150627031811-00075-ip-10-179-60-89.ec2.internal.warc.gz"}
https://www.oreilly.com/library/view/recent-advancements-in/9781000210200/xhtml/12_Chapter03.xhtml
3 Universal α-graceful Gear related Graphs Divya K. Jadeja, Department of Mathematics, Saurashtra University, Rajkot, Gujarat (INDIA), E-mail: divyajadeja89@gmail.com V. J. Kaneria, Department of Mathematics, Saurashtra University, Rajkot, Gujarat (INDIA), E-mail: kaneriavinodray@gmail.com In 1967 Rosa defined graceful labeling and α-labeling. For a graph G, a graceful labeling f is called an α-labeling if there is a non-negative integer k (0 ≤ k ≤ |E(G)|) such that min{f(u), f(v)} ≤ k < max{f(u), f(v)} for every edge uv ∈ E(G). Here we call a graph G which admits an α-labeling an α-graceful graph. A vertex v ∈ V(G) is called an extreme vertex for G, if there is an α-graceful labeling f on G such that f(v) = 0. A graph G is called a universal α-graceful graph if all of its vertices ...
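The definitions above lend themselves to a small computational check. The snippet below (my own illustration; the example graph, labels, and function names are not from the excerpt) tests whether a labeling is graceful and whether a boundary value k with min{f(u), f(v)} ≤ k < max{f(u), f(v)} exists on every edge.

```python
# Check a candidate graceful labeling and look for an alpha-labeling boundary value k.
def is_graceful(edges, label):
    """Distinct vertex labels in {0,...,q} whose induced edge labels |f(u)-f(v)|
    are exactly {1,...,q}, where q is the number of edges."""
    q = len(edges)
    values = list(label.values())
    if len(set(values)) != len(values) or not all(0 <= v <= q for v in values):
        return False
    return sorted(abs(label[u] - label[v]) for u, v in edges) == list(range(1, q + 1))

def alpha_boundary(edges, label):
    """Return a k with min(f(u),f(v)) <= k < max(f(u),f(v)) on every edge, or None."""
    if not is_graceful(edges, label):
        return None
    lo = max(min(label[u], label[v]) for u, v in edges)   # k must be at least every edge minimum
    hi = min(max(label[u], label[v]) for u, v in edges)   # k must be below every edge maximum
    return lo if lo < hi else None

# Path P4 (a classic alpha-graceful graph), labeled 0-3-1-2:
edges = [("a", "b"), ("b", "c"), ("c", "d")]
label = {"a": 0, "b": 3, "c": 1, "d": 2}
print(is_graceful(edges, label), alpha_boundary(edges, label))  # True 1
```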
2021-10-23 22:05:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 1, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23394432663917542, "perplexity": 4984.443645280669}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585768.3/warc/CC-MAIN-20211023193319-20211023223319-00261.warc.gz"}
http://mathematica.stackexchange.com/tags/assumptions/hot
Tag Info 25 I can explain this. The definite flavor of Integrate works with assumptions in a few ways. One is to use them in Simplify, Refine, and a few other places that accept Assumptions, to do what they will in the hope of attaining a better result (it also uses them to determine convergence and presence or absence of path singularities). Those places also get the $... 20 The most direct way to test this is probably the following:$Assumptions = x > 0; Element[x, Reals] // Simplify (* Out[1]= True *) $Assumptions = True; Element[x, Reals] // Simplify (* Out[4]= x ∈ Reals *) So$x>0$seems to imply that$x$is real. 17 You can modify the global system variable$Assumptions, to get the effect you want: $Assumptions = aa[t] > 0 Then Integrate[D[yy[x, t], t]^2, {x, 0, 18}] 10.1601 Derivative[1][aa][t]^2 This may, however, be somewhat error-prone. Here is how I'd do this with local environments. This is a generator for a local environment: createEnvironment[... 17 The problem is due to Mathematica thinking that the version with the Re[] is actually simpler. This is because the default complexity function is more or less LeafCount[], and In[332]:= ArcTan[-Re[x+z],y]//FullForm Out[332]//FullForm= ArcTan[Times[-1,Re[Plus[x,z]]],y] whereas In[334]:= ArcTan[-x-z,y]//FullForm Out[334]//FullForm= ArcTan[Plus[Times[-1,x],... 17 It is assumed that$x$is a real number. Everything else would mathematically not make sense because on complex numbers there does not exist an ordering relation. An example would be to take the expression$\sqrt{x^2}$and to imagine that this is not equal$x$for$x=-\mathbb{i}$. Therefore the expression is in a general form not simplified In[37]:= Sqrt[x^... 15 The behaviour you observed is completely independent of NumericQ. It could also be seen with a function foo which has the initial definition foo[_]=False. Example 1: Initially you define NumerixQ[x]=True, which tells Mathematica that whenever it evaluates the expression NumericQ[x], it should evaluate to True. Since for symbols, it is pre-defined to return ... 15 There's a bit more to the story. Mathematica treats variables as complex by default, and I for one have had trouble figuring out how Limit figures out how to treat variables such as c in this case. Some analysis First, let's examine a0 (= a in OP) with the assumption thatc is real: a0 = (h^2 + c^2 h^2 + Sqrt[4 h^2 + (h^2 + c^2 h^2)^2])^2 / (4 (h^2 ... 14 For Integrate as well as for Simplify, Refine FunctionExpand, Limit etc. there is an option Assumptions: Integrate[ 1/Sqrt[ z^2 + u^2], {z, -l, l}, Assumptions -> (u | l) ∈ Reals] ConditionalExpression[ 2 ArcSinh[ l/Abs[ u]], u != 0 && l >= 0] or one can use Assuming[ (u | l) ∈ Reals, Integrate[ 1/Sqrt[ z^2 + u^2], {z, -l, l}]] the ... 14 Integrate can take the option Assumptions. Integrate[1/Sqrt[z^2 + u^2], {z, -l, l}, Assumptions -> u > 0 && l > 0 && Element[u | l, Reals]] ==> 2 Log[(l + Sqrt[l^2 + u^2])/u] Alternatively use Assuming. Assuming[u > 0 && l > 0 && Element[u | l, Reals], Integrate[1/Sqrt[z^2 + u^2], {z, -l, l}]] ==&... 14 Working with such a sophisticated function as Reduce, if we can't get the result initially we should add possibly many assumptions. Without the Backsubstitution option it yielded: Reduce[ Abs[x] + Abs[y] + Abs[z] + Abs[t] == 1 && t != 0, {x, y, z, t}, Reals] No more memory available. Mathematica kernel has shut down. Try quitting other ... 12 In Simplify[%,a>0] the symbol > is a logical operator. In Simplify[%,a=0] the symbol = is not a logical operator. 
You must use the logical operator Equal, so Simplify[%,a==0] works fine! Example: (a + b)^2 // Expand a^2 + 2 a b + b^2 Simplify[%, a == 0] gives b^2 the right answer. :-) Orleo 12 Here is a quick description. GenerateConditions -> False will both skip some code for checking parameter regions of validity for an integral, and also a regularized integral might be computed. This interface should probably be improved but I've no idea if or when that might happen. GenerateConditions -> Automatic behaves like True for single definite ... 12 To make your integral convergent, you should have assumed m > Sqrt[u + 1]; then, you shouldn't have assumed other conditions for m. If we do that, we get a pretty nice result : int[u_, m_] = Integrate[ 1/Sqrt[(s^2 - u)^2 - 1], {s, m, Infinity}, Assumptions -> u > 2 && m > Sqrt[u + 1]] EllipticF[... 11 It is a bug in Series. Note that a := (h^2 + c^2 h^2 + Sqrt[4 h^2 + (h^2 + c^2 h^2)^2])^2/( 4 (h^2 + 1/4 (h^2 + c^2 h^2 + Sqrt[4 h^2 + (h^2 + c^2 h^2)^2])^2)) b = FullSimplify[a] Series[a,{h,Infinity,0}] (* Out: 1 + O[1/h]^2 *) Series[b,{h,Infinity,0}] (* Out: O[1/h]^2 *) The fact is that for$h\to\infty$there are two terms cancelling each other in the ... 10 This is a known limitation in Series and Limit. Series does not handle roots in a flawless manner. For example, here is an expansion at the branch point (zero) that is only "half" correct. In[4]:= Series[Sqrt[x^2], {x,0,2}] 3 Out[4]= x + O[x] This is fine for re(x)>0, but not so good for re(x)&... 9 You should assume that your variables are real, (if you want M to proceed further) because Mathematica treats variables in general as complex. One of many ways to do it : expr = A ((Cos[k y] + I Sin[k y]) 2 I Sin[t ω]); Refine[ Im[ expr], (A | k y | t ω) ∈ Reals] 2 A Cos[k y] Sin[t ω] We needn't use ComplexExpand defining expr, but in this case it ... 9 In your particular examples, PowerExpand[Log[x^a]/a] evaluates to Log[x], and PowerExpand[1/a*Log[(x + Log[x]*Cos[x])^a]] also works. EDIT: To be clear, and as commented upon by Andrzej, PowerExpand may give wrong answers. See the documentation, in particular this. EDIT2: Does something like (1/a*Log[(3*Exp[-1/x]*Sqrt[1 - Exp[1/x]])^a]) //. Log[Times[x_, ... 9 Simplification in Mathematica is often a black art, and requires great use of your own intuition and knowledge to be effective. That said, I bring your attention to the series from of$\DeclareMathOperator{\erfi}{erfi}\erfi(z)$, $$\erfi(z) = \frac{2}{\sqrt{\pi}}\sum^\infty_{k=1} \frac{z^{(2k+1)}}{k! (2k+1)}.$$ Consider what happens when we use that to ... 9 Mathematica is a term rewriting system, variables need not to be declared as in compiled languages. For a general view I recommend reading this post by Leonid Shifrin. In general, symbolic variables are processed as complex if not assumed otherwise. To specify assumptions there are a few ways :$Assumptions are recommended when you want to use global ... 9 Let's define : a1 = (h^2 + c^2 h^2 + Sqrt[4 h^2 + (h^2 + c^2 h^2)^2])^2/(4 (h^2 + 1/4 (h^2 + c^2 h^2 + Sqrt[4 h^2 + (h^2 + c^2 h^2)^2])^2)); a2 = a1 // Simplify a3 = a1 // FullSimplify Mathematica 7 and 9 Limit[ #, h -> Infinity]& /@ {a1, a2, a3} {1, 1, 0} while assuming that c is a real number : Limit[#, h -> Infinity, ... 8 I would avoid applying PowerExpand to anything except very simple expressions, since PowerExpand can easily return incorrect answers. 
For example, PowerExpand[Sqrt[(1 - x)^2] + Log[(x - 1)^2]] will return 1 - x + 2 Log[-1 + x], which is wrong, except when x=1 and both expressions are infinite. So if you only want to expand logarithms, it is better to use ... 8 Maximize does not take $Assumptions into account by default, but wants the assumptions to be given explicitly: Assuming[Abs[x]>=3,Maximize[-x^2,x]] (* ==> {0, {x -> 0}} *) Maximize[{-x^2,Abs[x]>=3},x] (* ==> {-9, {x -> -3}} *) However you can inject$Assumptions explicitly: Assuming[Abs[x]>=3,Maximize[{-x^2,$Assumptions},x]] (* ==&... 8 In general the situation is much more subtle than the other answers suggest. For example this issue is present in version 8 while not in version 7 : Integrate[ Exp[-a^2] Sin[2 t] (a^2 + b^2 + b*Cos[t] + a*Sin[t]), {t, 0, 2 Pi}]$Assumptions = {x > 0}; Integrate[ Exp[-a^2] Sin[2 t] (a^2 + b^2 + b*Cos[t] + a*Sin[t]), {t, 0, 2 Pi}] 0 8/3 Sqrt[a^2 + b^2]... 8 We need an appropriate complexity function. There were a few questions on this topic but in general, it is not obvious how to design an adequate function and it may appear quite difficult. Moreover there have been certain hidden changes of ComplexityFunction in Mathematica 9 (see: FullSimplify does not work on this expression with no unknowns. By default ... 8 An even much faster way to accomplish this is: ComplexExpand[RotationMatrix[fi, {x, y, z}], TargetFunctions -> {Re, Im}] // FullSimplify 8 df2 = D[df1, μ]; $Assumptions = Flatten[{Thread[{c1, c2, λ, μ} > 0], Element[{c1, c2, λ, μ}, Reals], μ > λ}]; FullSimplify@Positive[df2] (* True *) FullSimplify@Sign[df2] (* 1 *) Or, you can use your assumptions directly as the rhs of the Assumptions option, or as the first argument of Assuming, without setting the value of the global variable$... 7 We get correct results if we act ComplexExpand on the integrand ComplexExpand @ Exp[-a w^2 + b I w^3] E^(-a w^2) Cos[b w^3] + I E^(-a w^2) Sin[b w^3] 1. Integrate[ ComplexExpand @ Exp[-a w^2 + b I w^3], {w, -Infinity, Infinity}] ConditionalExpression[ (2 a E^((2 a^3)/(27 b^2)) BesselK[1/3, (2 a^3)/(27 b^2)])/( 3 Sqrt[3] Abs[b]), ... 7 I think this is a bug. Close enough expressions yield better results, e.g. FullSimplify[ ArcTan[ -# Re[x + z], y], (x | y | z) \[Element] Reals ] === ArcTan[ -# (x + z), y] & /@ { 1.0, 1, Sqrt[1.], Exp[0.], 1 - 0., 2, a} {True, False, True, True, True, True, True} The problem seems to be specific for a ... 7 You can use /. (ReplaceAll) : % /. a->0 Simplify[%,a=0] produces an error ( this expression a = 0 cannot be used as an assumption) because it means just setting the value zero to the variable a, in another form Set[ a, 0], see Set. In some cases, when there are more variables which depends on another ones you may need the repeated replacement for ... 7 Usually simplifying the result with appropriate assumptions gives desired result: m={{3, 2, 1}, {3, 1, 2}, {2, 3, -1}, {-(3/b), -(3/b^2) - 2/b, -(3/b^3) - 2/b^2 - 1/b}, {-(3/b), -(3/b^2) - 1/b, -(3/b^3) - 1/b^2 - 2/b}, {-(2/b), -(2/b^2) - 3/b, -(2/b^3) - 3/b^2 + 1/b}}; Simplify[PseudoInverse[m], b \[Element] Reals] Only top voted, non community-wiki answers of a minimum length are eligible
2016-07-29 19:46:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35456427931785583, "perplexity": 4285.072923863765}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257831771.10/warc/CC-MAIN-20160723071031-00239-ip-10-185-27-174.ec2.internal.warc.gz"}
https://thephilosophyforum.com/discussion/6530/what-is-the-difference-between-actual-infinity-and-potential-infinity
## What is the difference between actual infinity and potential infinity? • 4.2k I'm confused by the distinction actual vs potential infinity? From wikipedia I get: Potential infinity is a never ending process - adding 1 to successive numbers. Each addition yields a finite quantity but the process never ends. Actual infinity, if I got it right, consists of considering the set of natural numbers as an entity in itself. In other words 1, 2, 3,.. is a potential infinity but {1,2, 3,...} is an actual infinity. In symbolic terms it seems the difference between them is just the presence/absence of the curly braces, } and {. Can someone explain this to me? Thanks. • 3k Looks like the difference between Platonism and Constructivism. If you think mathematical objects are real and have existence beyond humans calculating or proving them, then infinity is actual. If you don't, then it's only potential, because we'll never add all the way up to infinity. • 416 Formally speaking, actual infinity merely refers to the axiom-of-infinity. But this isn't a good answer, because this does not account for the controversial motives for introducing the axiom. The underlying dilemma is the result of different interpretations of the informal sign "..." used to denote partially elicited sets, and how these different interpretations lead to different conclusions concerning the very meaning of a set, including what sort of sets are admissible in mathematics. Ordinarily, in statements such as {0,1,2,3,...}, the sign "..." is used to state that the "set" refers to a rule (as in this case, the rule of adding one and starting from zero), as opposed to an actually completed and existent body of entities. This is synonymous with potential infinity, that appeals to one's temporal intuitions regarding a process whose state is incremented over time. In other cases such as "my shopping list is {Chicken,wine,orange juice,...} ", the dots might denote either i) an abbreviation for a particular, finitely describable list that is already existent, but only partially described on paper or ii) An indication that a list is abstract and only partially specified, that the reader is invited to actualize for himself via substituting his own items, or rule of extension. or iii) a mystical sign, referring to "actual infinity" in a sense that is empirically meaningless, physically useless and logically a mere piece of syntax, but which nevertheless has psychological value in causing giddy vertigo-like sensations in true-believers when they contemplate the unfathomable. Unfortunately, because "..." is informal notation with at least three completely distinct operational uses in addition to having private psychological uses, people continue to conflate all of these uses of the dots, causing widespread bewilderment, philosophical speculation and moral panic up to the present day. • 803 Actual infinity, if I got it right, consists of considering the set of natural numbers as an entity in itself. In other words 1, 2, 3,.. is a potential infinity but {1,2, 3,...} is an actual infinity. In symbolic terms it seems the difference between them is just the presence/absence of the curly braces, } and {. Technically, I think that it should be #{1,2, 3,...} or card({1,2, 3,...}) or |{1,2, 3,...}| for actual infinity (cardinality symbols). 1,2, 3,... is just a sequence and not a set. sequence: Unlike a set, the same elements can appear multiple times at different positions in a sequence, and order matters. 
In fact, there is another notation that is very close to set and sequence: a tuple or n-tuple: (1,2, 3). tuple: In mathematics, a tuple is a finite ordered list (sequence) of elements. An n-tuple is a sequence (or ordered list) of n elements, where n is a non-negative integer. A tuple has a finite number of elements, while a set or a multiset may have an infinite number of elements. Now, to confuse the hell out of everybody, the arguments of a function are deemed a tuple, but the typical notation for variadic functions (=with variable number of arguments) is f(a,b,c, ...), while the use of the ellipsis "..." is forbidden in tuples. Furthermore, all these things are almost the same, with just a minute subtlety here and there ... ;-) • 4.2k Axiom of infinity. That's as subtle as a gun in your face I guess. I don't know. Am I making sense here? • 803 Axiom of infinity. That's as subtle as a gun in your face I guess. I don't know. Am I making sense here? Yes, I think it is. It is certainly what Wikipedia says.. The mathematical meaning of the term "actual" in actual infinity is synonymous with definite, completed, extended or existential,[4] but not to be mistaken for physically existing. The question of whether natural or real numbers form definite sets is therefore independent of the question of whether infinite things exist physically in nature. Of course, as it says, any representation as to whether physical infinite exists in the real, physical world is obviously out of scope in mathematics. • 856 The difference is presentation versus application; simulation versus stimulation. • 781 In symbolic terms it seems the difference between them is just the presence/absence of the curly braces, } and {. Actual infinity is not possible; there could always be more. Infinite is not an amount or an extent completed or capped, as extant, as that can't happen. • 991 In other words there is a largest number and infinity doesn't exist. But you will never arrive at this number, because in order to do so you'd have to take an infinite number of steps. Which is why we call it an infinity. • 1 Don't look at infinity as a number (it's not). Look at it something like this. Infinity represents something that can get as big as you want. The whole idea of limits is based on this concept. Taking a limit of a number and assigning it a value would mean you can get as close as you want to that value (doesn't mean you can actually get that value). • 191 to take an infinite number of steps If l say, l have taken 100 steps, l am using the word "steps" in a different sense, to mean a numerical quantity. When l say infinite number of steps, the word "steps" specifies the nature of steps ( they do not end) in such a system, it does not refer to a numerical quantity. Therefore we cannot such a definition for infinity. It is very difficult to define infinity using any concept other than infinity itself. Hence it is often circular, self referential. • 191 In maths, how would you interpret limits that do not exist to those that exist. For example, lim x-->o ( 1/x )does not exist but lim x--> 1 ( x-1/x^2-1) . The second one has removable singularity. Somehow we can assign useful value when using infinity but not always,so there are problems sometimes • 191 Infinity doesn't exist as a number but a concept and l think even the concept has faults. • 803 In other words there is a largest number and infinity doesn't exist. Without the axiom of infinity, a concept of actual infinity is not viable. 
That is obviously also the reason why the axiom was introduced. Otherwise, there would simply be no need for it. The use of actual infinity is not even permitted in mathematics without axiomatizing it first. Therefore, it is perfectly ok for you to reject the axiom, but then you can also not make use of any of its consequents. Since it is the sixth axiom in ZFC, you cannot make use of ZFC either. You will need to use an alternative set theory (of which there are actually many). One possible problem could be that you cannot make use of any of the large number of theorems that rest on ZFC, unless they do not make use the axiom of infinity. However, it is a lot of work to weed through all of that, because it requires verifying their proofs. When rejecting ZFC, a lot of things that you would do in set-theoretical context will now be incompatible with mainstream set theory. Welcome to Hassle-land where everything that would have been simple, now becomes complicated! • 4.2k The axiom of infinity. So an infinite set is postulated to exist. My statement about the infinite set of natural numbers was poorly worded. What I should have said is that a largest natural number exists by the following argument: Let's look at the sequence of natural numbers which I think is the "simplest" infinity we can talk about. Natural numbers: 1, 2, 3,... Observe how successive numbers "increase" a) 1 to 2 the quantity has doubled (2 = 2 × 1) b) 2 to 3 : (3 = 1.5 × 2) c) 3 to 4 : (4 = 1.33... × 3) d) 4 to 5 : (5 = 1.25 × 4) . . .as you can see the factor (numbers in bold) is decreasing and approaching a limit which is 1. Look at larger numbers below: e) 9999 to 10000 : (10000 = 1.001... × 9999) f) 99999 to 100000 : (100000 = 1.0001... × 99999) The pattern suggests that eventually there will be two very very large numbers A and B such that: 1. B = A + 1 (B is the next number we getting by adding 1 to A) 2. B = A × 1 = A (the pattern I showed you suggests that 1 is the limit of the factor by which a number increases in bold) In other words there is a largest natural number. • 191 The set of natural numbers does not have an upper bound, so it will always have a number that is smaller than another number. In other words, there is no largest number. If you disagree with the axiom that a set can have infinite elements, then it is possible to say that there is a certain largest number in a set but otherwise no. The problem with axiom of infinity is that it fails to fall in one of the two categories. Intension and extension. intensional definition gives the meaning of a term by specifying necessary and sufficient conditions for when the term should be used. This is the opposite approach to the extensional definition, which defines by listing everything that falls under that definition Some logician view that infinite extensions are meaningless as extensions must be complete in order to be well defined, so infinity cannot be defined by extensions. ( They reject Cantors proof too ) The problem with definition using intention is that they are circular. • 842 I'm confused by the distinction actual vs potential infinity? There's a straightforward and unambiguous mathematical distinction. The inductive axiom of the Peano axioms say that whenever n is a number, n + 1 is a number. So we have 0, and 1, and 2, and 3, ... [The fact that 0 is a number is another axiom so we can get the induction started]. However we never have a "completed" set of them. 
In any given application we have as many numbers as we need; but we never have all of them assembled together into a single set. The axiom of infinity says that there is a set containing all of them. So with the Peano axioms we may write: 0, 1, 2, 3, ... With the axiom of infinity we may write: {0, 2, 3, ...}. The brackets mean that there is a single completed object, the set of all natural numbers. That's Cantor's great leap. To work out the mathematical consequences of completed infinity. I'm sure from a philosophical point of view there may be some quibbles. But this is how I think of it. the axiom of mathematical induction gives you potential infinity. The axiom of infinity gives you completed infinity. Note that even with potential infinity, there are still infinitely many numbers. It's just that we can't corral them all into the barn. In fact in Peano arithmetic, the collection of all the natural numbers is a proper class. This is a good way to visualize what we mean when we say that a given collection is "too big" to be a set. Axiom of infinity. That's as subtle as a gun in your face I guess. I don't know. Am I making sense here? Yes perfect sense. The axiom of infinity is a humongously ambitious claim for which there's currently no evidence in the real world. It's a bold statement. On the other hand without it, we can't get a decent theory of the real numbers off the ground. So the ultimate reason to adopt the axiom of infinity is pragmatic. It gives a much more powerful theory. Whether it's "true" in any meaningful sense is, frankly, doubtful. • 28 There are two different domains of discussion: (1) mathematics itself and (2) philosophy of mathematics. (1) MATHEMATICS ITSELF (There are forms of mathematics other than classical set theoretic mathematics, but for brevity by 'mathematics' I mean ordinary classical set theoretic mathematics.) In mathematics we don't ordinarily think in terms of a noun 'infinity' but instead of the adjective 'is infinite'. There is no object (abstract of otherwise) named by 'infinity' (setting aside in this context such things as points of infinity in the extended real system). Rather the adjective 'is infinite' holds for some sets and not for others. Formal definitions of 'finite' and 'infinite': A set S is finite if and only if S is in one-to-one correspondence with a natural number. A set S is infinite if and only if S is not finite. In mathematics itself there is not a formal set theoretic notion of 'potentially infinite'. Mathematics instead proceeds elegantly without undertaking the unnecessary complication of devising a formal definition of 'potentially infinite'. (1) PHILOSOPHY OF MATHEMATICS In the philosophy of mathematics, the distinction between actually infinite and potentially infinite might be described along these lines: Actually Infinite. An actually infinite set is an object (presumably abstract) that has infinitely many members. The set of natural numbers is an actually infinite set. Potentially Infinite. There are some philosophers or commenters on mathematics who do not accept that there are actually infinite sets. So for them there is no set whose members are all the natural numbers. Instead these commenters refer to processes that are always finite at any point in the execution of the process but that have no finite upper bound, so that for any step in the execution, there is always a next step available. 
For example, with counting of natural numbers, only finitely many natural numbers are counted at any given step, but there is always a next step allowed. In constructive mathematics (not classical mathematics), perhaps, with research, one can find formal systems with a formal definition of 'potentially infinite'. But I would bet that any such system would be a lot more complicated and more difficult to work within than classical mathematics. This is the drawback of the notion of 'potentially infinite'. One can talk about it philosophically, but it takes a lot more work to devise a formal system in which 'potentially infinite' is given an exact, formal definition. Looks like the difference between Platonism and Constructivism. It is not necessary to adopt platonism to accept that there are infinite sets. One may regard infinite sets as abstract mathematically objects, while one does not claim that abstract mathematical objects exist independently of consciousness of them. it should be #{1,2, 3,...} or card({1,2, 3,...}) or |{1,2, 3,...}| for actual infinity No, that is not required. (1) There are infinite sets that are not cardinals. (2) Let w (read as 'omega') be the set of natural numbers. So w = {x | x is a natural number}. That is what is meant by {0 1 2 ...} (I drop unnecessary commas). And w itself is a cardinal, and for any cardinal x, we have card(x) = x anyway. Here is an explication of 'set', 'tuple', 'sequence', 'multiset' in (set theoretic) mathematics: Everything is a set, including tuples, sequences, and multisets. A tuple is an iterated ordered pair. Definitions: {p q} = {x | x = p or x = q} {p} = {p p} <p q> = {{p} {p q}} Then also, for example, <p q r s t> = <<<p q> r> s> t> S is a sequence if and only if S is a function whose domain is an ordinal. S is a finite sequence if and only if the domain of S is a natural number. (There is an "isomorphism" between tuples and finite sequences. For example: The tuple <x y z> "encodes the same information" as the sequence {<0 x> <1 y> <2 z>}.) S is a denumerable sequence if and only if the domain of S is w. S is a multiset if and only if S is of the form <T f> where f is a function whose domain is T and every member of the range of f is a cardinal. (So f "codes" how many "occurences" there are of the members of T in the multiset.) It is very difficult to define infinity using any concept other than infinity itself. Hence it is often circular, self referential. There is no circularity in the set theoretic definition of 'is infinite'. Without the axiom of infinity, a concept of actual infinity is not viable. Depends on what you mean by 'viable'. There is a set theoretic definition of 'is infinite' without the axiom of infinity. The axiom infinity implies that there exists a set that is infinite, but we don't need the axiom just to define 'is infinite'. I think you were pretty much saying that yourself, but I wish to add to it. Indeed, we agree that dropping the axiom of infinity makes an axiomatic treatment of mathematics extremely complicated. Your claimed proof that there is no infinite set is not recognizable as a proper mathematical argument but instead proceeds by hand waving non sequitur. The set of natural numbers does not have an upper bound, so it will always have a number that is smaller than another number. No, there is no natural number smaller than the natural number 0. So maybe you meant that for any natural number n there is a natural number greater than n. 
The problem with axiom of infinity is that it fails to fall in one of the two categories. Intension and extension. "intensional definition gives the meaning of a term by specifying necessary and sufficient conditions for when the term should be used." "This is the opposite approach to the extensional definition, which defines by listing everything that falls under that definition." That is irrelevant because the axiom is not a definition and does not need to meet any standards of definitions. Also, we have to distinguish between two different notions of extensional/intensional. Aside from yours, there is the notion of extensionality that applies to set theory: Sets are extensional because they are determined solely by their members. That is, S = T if for all x, x is a member of S if and only if x is a member of T. And it doesn't matter whether a set is described by what you call 'intension' (such as {x | x has property P}) or, for finite sets, by finite listing in braces. For example, {x | x is a natural number less than 3} = {0 1 2}. Of course, infinite sets don't have listings such as {0 1 2}, but that does not vitiate that they exist. Some logician view that infinite extensions are meaningless as extensions must be complete in order to be well defined, so infinity cannot be defined by extensions. ( They reject Cantors proof too ) The problem with definition using intention is that they are circular. Maybe there are such logicians, but even constructivists accept the proof of Cantor's Theorem and Cantor's proof of the uncountability of the reals. And there is no circularity in the definitions of set theory. Mathematical definitions are not circular (that is, if a purported definition is circular then somewhere in the formulation of the purported definition there is a violation of the formulaic rules for mathematical definition). • 28 in Peano arithmetic, the collection of all the natural numbers is a proper class. I wouldn't state it that way. If we mean first order Peano arithmetic (PA), then there are not in PA definitions of 'set', 'class', and 'proper class'. Meanwhile, in set theory, the domain of the standard model of PA is a set. • 28 [start quote of post] This is a question from an elementary math book: u = u + 1. (i) Find the value of u (ii) What is the difference between nothing and zero? If you try and solve u = u + 1 you'll get 0 = 1 (subtracting a from both sides) 0 = 1 is a contradiction. So u is nothing. u is NOT zero. u is nothing. Why? Take the equation below: e + 1 = 1 Solving the equation for e gives us e = 0. The same cannot be said of u = u + 1 our first problem. So given the above equations ( u = u + 1 AND e + 1 = 1) we have the following: 1) u is NOTHING. u is NOT zero 2) e = zero What's the difference between NOTHING and zero? My "explanation" is in terms of solution sets. The solution set for u = u + 1 is the empty set { } with no members The solution set for e + 1 = 1 is {0} with ONE member viz. zero. There's another mathematical entity that can be used on the equation u = u + 1 and that is INFINITY. INFINITY + 1 = INFINITY So we have: a) u is NOTHING b) u is INFINITY Therefore, NOTHING = INFINITY Where did I make a mistake? Thank you. [end quote of post] (1) What math book is that? What is the context? What does the variable 'u' range over? What specific operation does '+' stand for? 
(2) There is no mathematical object named 'infinity' (unless it's something like a point of infinity in the extended real system - and in a context like that, the operations of addition and subtraction have special modified formulations that avoid such contradictions). And if infinite sets are meant, then operations such as cardinal addition or ordinal definitions are formulated so that they may not be confused with the operations of addition on natural numbers or on real numbers. (3) Your "nothing = infinity" is just wordplay. As mentioned, there is not an object named 'infinity'. And 'nothing' also is not the name of a mathematical object. To say something like "nothing is not equal to itself" is not saying that there is an object named 'nothing' that has the property of not being equal to itself. Rather, it means that there is no object that has the property of not being equal to itself. So then putting an equal sign between 'nothing' and 'infinity' is nonsense. • 803 Everything is a set, including tuples, sequences, and multisets. There is an "isomorphism" between tuples and finite sequences. For example: The tuple <x y z> "encodes the same information" as the sequence {<0 x> <1 y> <2 z>}. What would be the operator in the isomorphism? Otherwise, without such operator, isn' it just a bijection? It is just a mapping between two sets, no? Still, in my impression, the definition for morphism may be a bit ambiguous because in category theory they do not really seem to insist on the presence of such operator, while in abstract algebra they absolutely do. By the way, I find abstract algebra much more accessible than certainly the deeper caves of category theory. It is only when they sufficiently overlap that it is clear to me ... • 4.2k :ok: :up: • 28 I wrote "isomorphism" in scare quotes because I don't mean an actual function. I mean that tuples and sequences are "isomorphic" in that you can recover the order from one to the other and vice versa. This can be expressed exactly, but it's a lot of notation to put into posts such as these. Anyway, the general idea is obvious and used in mathematics extensively. • 4.2k Your claimed proof that there is no infinite set is not recognizable as a proper mathematical argument but instead proceeds by hand waving non sequitur Thank you and can you be more specific. There are quite a number of steps I went through in my "proof". The final steps in my proof: 1. b = a + 1 (just like 3 = 2 + 1) 2 b = a × 1 but as @Echarmion said I think a = b = infinity. Math breaks down at both ends of the whole number line: at zero and at infinity. • 28 This post got out of sequence. I put the text in my next post. • 4.2k (1) What math book is that? What is the context? What does the variable 'u' range over? What specific operation does '+' stand for? A very simple text. I'm quite certain there's very little ambiguity with the concepts I used. 2) There is no mathematical object named 'infinity Axiom of infinity? (3) Your "nothing = infinity" is just wordplay. How is it "wordplay"? The solution set for a = a + 1 is the empty set { } with no members. In different words a is NOTHING, not even zero NOTHING = NOTHING + 1 INFINITY = INFINITY + 1 Oh I see now. They may not be the same thing but just two different objects that behave in the same way. Thanks. • 4.2k From what axioms, definitions, and rules of inference do you argue that? 
Sorry if this puts you off but what axioms would be necessary for the existence of natural numbers and the basic mathematical operations of + and ×? I begin from these • 803 I wrote "isomorphism" in scare quotes because I don't mean an actual function. I mean that tuples and sequences are "isomorphic" in that you can recover the order from one to the other and vice versa. This can be expressed exactly, but it's a lot of notation to put into posts such as these. Anyway, the general idea is obvious and used in mathematics extensively. Oh, yes, agreed, it slipped my mind. It is indeed not just a set. Unlike in sets, the actual order of elements is also a piece of information that sequences and tuples carry. So, it is indeed more than a mapping between orderless sets. • 28 A very simple text. What is the name and author of the text? Axiom of infinity? The axiom of infinity is not a mathematical object named 'infinity'. Moreover, the axiom of infinity itself is a finite mathematical object, as it is a finite string of symbols in a formal language. How is it "wordplay"? I explained explicitly in my post. Oh I see now. They may not be the same thing but just two different objects that behave in the same way. No, you just made the same mistake I pointed out the first time. what axioms would be necessary for the existence of natural numbers and the basic mathematical operations of + and ×? I begin from these There are lots of different axiom systems for such things. For example, set theory. The existence of natural numbers is proven in set theory (even without the axiom of infinity). The existence of the set of natural numbers is proven in set theory (with the axiom of infinity). The operations of addition and multiplication are also definable and proven to exist in set theory. Set theory proceeds from formal axioms, formal definitions, and formal rules of inference. Your argument has no apparent basis in those axioms, definitions, and rules. So I ask you what, exactly, are your axioms, definitions, and rules. Without specifying them, your argument, using such verbiage as "this pattern suggests" and then the non sequitur "in other words there is a largest natural number" is nonsensical handwaving, also known as 'waffle'. Moreover, not just axioms, but ordinary mathematical common sense endows us with the understanding that there is no greatest natural number. Suppose there were a greatest natural number n. Then n+1 is greater than n. So n is not, after all, a greatest natural number. • 803 This can be expressed exactly, but it's a lot of notation to put into posts such as these. I've got a question about infinite cardinalities. The following set of sets is an element of the powerset of real numbers: {{1.2323,343.3333},{344.2,0,34343.444,6454.6444},{2323.11,834.33},{},{5 12.1,99.343433}} So, any language expression that matches only this kind of stuff, would be the membership function for a set of which the cardinality would be the powerset of real numbers, i.e. beth2. Now, regular languages cannot match wellformedness. So, things like matching embedded braces { } is out of the question. But I just concocted a set notation that does not use wellformedness: [ 1.2323 343.3333 344.2 0 34343.444 6454.6444 2323.11 834.33 5 12.1 99.343433 ] It is the same information as above, but in another notation. This notation is regular and can be successfully matched by a regular expression. I tried it at the test site https://regex101.com. The regex looks like this: $\n((\d*(\.\d*)? 
?)*\n?)*$ Since this expression successfully matches sets of sets of real numbers, can I say that it is the membership function of a set with cardinality beth2, i.e. 2^2^beth0? If that makes sense, then it would be a witness to the claim that regular expressions can describe sets of which the cardinality exceeds that of the continuum, i.e. uncountable infinity.

• 28

The following set of sets is an element of the powerset of real numbers:

No, that set is a member of the power set of the power set of the set of real numbers. And I don't understand the rest of your post, starting with "any language expression that matches only this kind of stuff, would be the membership function for a set of which the cardinality would be the powerset of real numbers"
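Going back to the tuple/sequence point made earlier in the thread, here is a minimal sketch (mine, not from any post) of the encoding being described: a tuple stored as a collection of index-value pairs, with the original order recoverable by sorting on the index.

```r
# Encode a tuple as (index, value) pairs, as in {<0 x> <1 y> <2 z>},
# and recover the original order by sorting on the index component.
as_pairs <- function(tup) {
  Map(function(i, x) list(index = i - 1, value = x), seq_along(tup), tup)
}
from_pairs <- function(pairs) {
  idx <- vapply(pairs, function(p) p$index, numeric(1))
  lapply(pairs[order(idx)], function(p) p$value)
}

pairs <- as_pairs(list("x", "y", "z"))  # the "sequence" {<0 x> <1 y> <2 z>}
from_pairs(sample(pairs))               # gives back "x", "y", "z" in order
```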
https://socratic.org/questions/100-of-what-number-is-70
# 100% of what number is 70?

100% of 70 = 70

100% means the same as $\frac{100}{100} = 1$. 100% means the full amount, or the whole total. So 100% of 70 = 70.
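As a quick worked line (my own restatement, not part of the original answer): writing the question as an equation gives $1 \times x = 70$, so $x = 70$; the number asked for is 70 itself.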
https://telescoper.wordpress.com/2010/05/25/water-and-energy/
## Water and Energy

I've refrained from blogging about the fraught history of my attempts to have a new gas boiler installed in my house. Today, however, at last I have finally succeeded in getting a state-of-the-art high-efficiency condensing contraption fit for the 21st Century, which will hopefully save me a few bob in gas bills over the winter but, more importantly, actually produce hot water for more than a minute or so without switching itself off.

The chaps that did the job for me actually had to test all the radiators too, which meant switching them all up to maximum. It wasn't quite as hot today as it was yesterday but nevertheless the inside of the house was like a Turkish bath for a while. I therefore sat outside in the Sun for a bit waiting for them to get finished and tidy everything up.

While I was sitting there I got thinking about sustainable energy and so on, and was reminded of a comment Martin Rees made in his Reith Lecture not long ago. Wanting to sound positive about renewable energy he referred to the prospect of generating significant tidal power using a Severn Barrage. Given the local relevance to Cardiff – one of the main ideas is a barrage right across the Severn Estuary from Cardiff to Weston-super-Mare – he presumably thought he was on safe ground mentioning it. In fact there was a lot of uneasy shuffling in seats at that point and the question session at the end generated some tersely sceptical comments.

Many locals are not at all happy about the possible environmental impact of the Severn Barrage. That, and the cost – probably in excess of £20 billion – has to be set against the fact that such a barrage could in principle generate 2GW average power from an entirely renewable source. This would reduce our dependence on fossil fuels and increase our energy security too. The resources probably aren't available right now given the parlous state of the public finances, but I'm glad that the Welsh Assembly Government is backing serious study of the various options. It may be that it won't be long before we're forced to think about it anyway.

The Wikipedia page on the various proposals for a Severn Barrage is very comprehensive, so I won't rehearse the arguments here. In any case, I'm no engineer and can't comment on the specifics of the technology required to construct, e.g., a tidal-stream generator. However, I have to say that I find the idea pretty compelling, provided ways can be found to mitigate its environmental impact.

For a start it's instructive to look at turbine-generated power. Wind turbines are cropping up around the British Isles, either individually or in wind farms. A typical wind turbine can generate about 1MW in favourable weather conditions, but it needs an awful lot of them to produce anything like the power of a conventional power station. They're also relatively unpredictable so can't be relied upon on their own for continuous power generation.

The power $P$ available from a wind turbine is given roughly by $P \simeq \frac{1}{2} \epsilon \rho A v^3$ where $v$ is the wind speed, $A$ is the area of the turbine, $\rho$ is the density of air, which is about 1.2 kg per cubic metre, and $\epsilon$ is the efficiency with which the turbine converts the kinetic energy of the air into useable electricity. The same formula would apply to a turbine placed in water, immediately showing the advantage of tidal power.
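To get a feel for the numbers, here is a rough check of that formula; the turbine sizes, flow speeds and the 40% efficiency below are illustrative assumptions of mine, not figures from the post.

```r
# Kinetic power available to a turbine: P = 0.5 * eps * rho * A * v^3
power <- function(eps, rho, A, v) 0.5 * eps * rho * A * v^3

# A large wind turbine (40 m blade radius) in a 10 m/s wind...
P_wind  <- power(eps = 0.4, rho = 1.2,  A = pi * 40^2, v = 10)  # about 1.2e6 W
# ...versus a much smaller tidal-stream turbine (10 m radius) in a 3 m/s current
P_tidal <- power(eps = 0.4, rho = 1025, A = pi * 10^2, v = 3)   # about 1.7e6 W
c(wind_MW = P_wind / 1e6, tidal_MW = P_tidal / 1e6)
```

Even with a sixteenth of the swept area and less than a third of the flow speed, the water turbine delivers comparable power; that is the density factor at work.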
For comparable efficiencies and sizes the ratio of power generated in a tidal-stream turbine to a wind turbine would be $\frac{P_{t}}{P_{w}}\simeq \frac{\rho_{t}}{\rho_{w}} \left( \frac{v_{t}}{v_{w}}\right)^{3}$ The speed of the water in a tidal stream can be comparable to the airspeed in a moderate wind, in which case the term in brackets doesn’t matter and it’s just the ratio of the densities of water and air that counts, and that’s a large number! Of course wind speed can sometimes be larger than the fastest tidal current, but wind turbines don’t work efficiently in such conditions and in any case it isn’t the $v$ which provides the killer factor. The density of sea water is about 1025 kg per cubic metre, a thousand times greater than that of air. To get the same energy output from air as from a tidal stream you would need to have winds blowing steadily ten times the velocity of the stream, which would be about 80 knots for the Severn. More than breezy! Not all proposals for the Severn Barrage involve tidal stream turbines. Some exploit the gravitational potential energy rather than the kinetic energy of the water by exploiting the vertical rise and fall during a tidal cycle rather than the horizontal flow. The energy to be exploited in, for example, a tidal basin of area $A$  would go as $E \simeq \frac{1}{2} \epsilon A\rho gh^{2}$ where $h$ is the vertical tidal range, about 8 metres for the Severn Estuary, and $g$ is the acceleration due to gravity. The average power generated would be found by dividing this amount of energy by 12 hours, the time between successive high tides. It remains to be seen whether tidal basin or lagoon based on this principle emerges as competitive. Another thing that struck me doodling these things on the back of an envelope in the garden is that this sort of thing is what we should be getting physics students to think about. I’m quite ashamed to admit that we don’t… ### 14 Responses to “Water and Energy” 1. […] This post was mentioned on Twitter by Sherry Driedger and Chattertrap Climate, Peter Coles. Peter Coles said: Water and Energy: http://wp.me/pko9D-1yg […] 2. Rhodri Evans Says: Peter – very interesting blog. Yes it is shameful that we don’t get our physics students thinking about sustainable forms of energy generation more. I should put you in touch with Jim Poole at the environment agency (where Maggie happens to work). I’ve chatted to him about sustainability before at some length. Apparently he gives a lecture to the first year engineering students each year and told me last year that he was keen to do something similar to the physics students. It would be great to set up. He’s the EA’s expert on sustainability including the proposed Severn barrage. 3. telescoper Says: Rhodri, That’s a great idea! One first-year lecture wouldn’t do everything I’d like to do, but it would be a start! Peter 4. Bryn Jones Says: This is, of course, the kind of analysis that can be carried out in physics tutorials to give students examples of scientific reasoning beyond the scope of the material covered in their lecture modules. On the issue of the role of physics in understanding human interaction with the environment specifically, I recall Mike Disney arguing when the Cardiff physics department was aiming to recruit a new chair in the mid 1990s that it should advertise a chair in environmental physics. Peter will remember that the Nottingham physics degree course had a module in environmental physics at one time (a frst-year module if I recall correctly). 5. 
telescoper Says: Bryn, We’ve just advertised a chair and several lectureships in physics and group in an area generally related to environmental physics could well be what comes out of that. Also the AIG is trying to move into earth sensing instrumentation instead of space astronomy, which would also be a move in that direction. Yes, I do remember the environmental physics module at Nottingham. I don’t know whether it’s still going but it was a good idea. I felt that it would be better a bit later on, though, when the students knew a bit more physics. We’re currently doing a course review here and I’ll be pushing for an environmental physics module, but it would probably be an option. I’d have no problem putting this kind of thing alongside nuclear (fission and fusion) in a general module on energy, for example, but these require quite a lot of advanced physics to do properly. However, I think we could easily generate problems for exercise classes generally that look at issues like this. It doesn’t have to be done in a separate module, but could instead be embedded more throughout the syllabus. Peter 6. S Jones Says: It’s becoming pretty famous now, but should mention the book Sustainable Energy without the Hot Air by David Mackay http://www.withouthotair.com/ Mackay is a professor of physics at Cambridge University and in the last third of the book gets into the nitty gritty of the physics of how much energy can be produced by solar, geothermal etc. Would get my vote to be on any reading list for a course teaching energy to phys students. 7. Bryn Jones Says: I wasn’t aware that David Mackay’s book was available online. Many thanks to S. Jones for pointing this out. 8. telescoper Says: Indeed. Mike Edmunds showed me David Mackay’s book on the way back from the RAS Club the other night. I thought it looked really good, but I promptly forgot about it. Thanks to Jones the reminder for the, er, reminder. 9. S Jones Says: I’m a huge fan of this book. I particularly like how he puts all units into kilowatt hours kilowatt hours per day kilowatt hours per day per person One of the tragedies of getting to grips with sustainable energy is that for historical reasons it has so many units. A lot of books and reports would be much simpler but for the kwh per year, calories, kcals, BTUs, exajoules, mega tonne of oil equivalent (Mtoe), quads, metres cubed of gas that one must wade through. Some things that would be as plain as day it we just had one unit require several pages of calculations to discover. eg. – how does all the heat pumped out by all the power stations, chimneys and campfires in the world compare to the heat being trapped by the greenhouse effect caused by those activities? Mackay solves this in a paragraph (its about 10% – the CO2 is far more important) – does it take more energy to cook a meal than is contained in it? Yes, sometimes. My dinner of chicken, potato and veg was 500calories or 0.6kwh but it took 0.8kwh to cook. Makes me wonder why I bothered. 10. telescoper Says: You mean he doesn’t use natural units? 11. S Jones Says: lol – I predicted that was coming!
http://www.physicsforums.com/showthread.php?p=1200216
Recognitions: Gold Member ## Alternative theories being tested by Gravity probe B Indeed, Garth. Let the data speak for itself. I do not lean either way, and I am certain you feel the same way. It will be difficult to sieve through the data . . . I hope you will be critical of that process. Has anyone done a parameterized post-Newtonian analysis? Can one express the expected results in terms of the usual Eddington alpha, beta gamma and higher order parameters? Any refs? Best, Jim I should have googled first. Apparently it tests gamma and alpha-one ( a non-conservative parameter), according to Will. No doubt that is why Nordstrom thinks the money has been wasted, as gamma has already been strongly constrained and most people believe in the conservation laws. Best, Jim Recognitions: Gold Member Quote by jgraber I should have googled first. Apparently it tests gamma and alpha-one ( a non-conservative parameter), according to Will. No doubt that is why Nordstrom thinks the money has been wasted, as gamma has already been strongly constrained and most people believe in the conservation laws. Best, Jim You'll find quite an exchange on the use of the word(s) "believe" (actually belief) in the dark matter, dark energy & gravity thread! The fact that other viable alternative gravitational/cosmological theories are also being tested by GP-B, such as SCC, makes the enterprise worthwhile. This is especially so in the light of persistent problems with the standard model, even if we gloss over the fact that the Higgs boson/inflaton, the DM particle and DE have not been identified in the laboratory. A recent paper examines a link between DM and baryonic matter Cold Dark Matter as Compact Composite Objects Some of the observations that may be in conflict with the standard viewpoint are: • The density profile is too cuspy, [4], [5], [6]. The disagreement of the observations with high resolution simulations is alleviated with time, but some questions still remain [5], [6]. • The number of dwarf galaxies in the Local group is smaller than predicted by CCDM simulations, [4], [5], [6]. This problem is also becoming less dramatic with time [5], [6]. • CCDM simulations produce galaxy disks that are too small and have too little angular momentum, [4], [5], [6]; • There is a close relation between rotation curve shape and light distribution. This implies that there is a close coupling between luminous and dark matter which is difficult to interpret, see e.g. [7]; • There is a correlation in early-type galaxies supporting the hypothesis that there is a connection between the DM content and the evolution of the baryonic component in such systems, see e.g.[8]; • The order parameter (either the central density or the core radius) correlates with the stellar mass in spirals[9]. This suggests the existence of a well-defined scale length in dark matter haloes, linked to the luminous matter, which is totally unexpected in the framework of CDM theory, but could be a natural consequence of DM and baryon interaction. • There is a mysterious correlation between visible and DM distributions on log−log scale, which is very difficult to explain within the standard CCDM model [10]; • A recent analysis of the CHANDRA image of the galactic center finds that the intensity of the diffuse X-ray emission significantly exceeds the predictions of a model which includes known Galactic sources [11]. The spectrum is consistent with hot 8 KeV spatially uniform plasma. 
The hard X-rays are unlikely to result from undetected point sources, because no known population of stellar objects is numerous enough to account for the observed surface brightness. It also seems that an Age Problem is raising its head again as observations of old evolved objects are being made at z > 4. All the more reason to keep an open mind and continue to confirm our "beliefs" with experimental verification. We live in interesting times! Garth Recognitions: Gold Member Quote by jgraber Has anyone done a parameterized post-Newtonian analysis? Can one express the expected results in terms of the usual Eddington alpha, beta gamma and higher order parameters? Any refs? Best, Jim Try Will's: The Confrontation between General Relativity and Experiment or my: Resolving the Degeneracy: Experimental tests of the New Self Creation Cosmology and a heterodox prediction for Gravity Probe B for an alternative model. They both use the parameterized post-Newtonian (PPN) analysis. Garth Recognitions: Gold Member Halfway through Phase II! The latest release from the Gravity Probe B website. GP-B DATA ANALYSIS & RESULTS ANNOUNCEMENT STATUS During the 50-week science phase of the GP-B mission and the 7-week instrument calibration phase, which lasted from August 2004 - Septermber 2005, we collected over a terabyte of experimental data. Analysis has been progressing through a 3-phase plan, each subsequent phase building on those preceding it. In Phase I, which lasted from the end of September 2005 through February 2006, the analysis focused on a short term—day-by-day or even orbit-by-orbit—examination of the data. The overall goals of this phase were to optimize the data analysis routines, calibrate out instrumentation effects, and produce initial "gyro spin axis orientation of the day" estimates for each gyro individually. At this stage, the focus was on individual gyro performance; there was no attempt to combine or compare the results of all four gyros, nor was there even an attempt to estimate the gyro drift rates. We are currently progressing through Phase II of the data analysis process, which began at the beginning of March and is scheduled to run through mid-August 2006. During Phase II, our focus is on understanding and compensating for certain long-term systematic effects in the data that span weeks or months. The primary products of this phase will be monthly spin axis drift estimates for each gyro, as well as refined daily drift estimates. In this phase, the focus remains on individual gyro performance. In Phase III, which is scheduled to run from late August 2006 through December 2006, data from all four gyros will be integrated over the entire experiment. The results of this phase will be both individual and correlated gyro drift rates covering the entire 50-week experimental period for all four gyros. These results will be relative to the position of our guide star, IM Pegasi, which changed continually throughout the experiment. Thus, the final step in the analysis, currently scheduled to occur in January 2007, will be to combine our gyro drift results with data mapping the proper motion of IM Pegasi relative to the unchanging position of a distant quasar. 
The proper motion of IM Pegasi has been mapped with unprecedented precision using a technique called Very Long Baseline Interferometry (VLBI) by Irwin Shapiro and his team at the Harvard-Smithsonian Center for Astrophysics (CfA), in collaboration with Norbert Bartel at York University in Toronto and French astronomer Jean-Francois Lestrade. Playing the role of our own harshest critic, our science team will then perform a careful and thorough final review of the analysis and results, checking and cross-checking each aspect to ensure the soundness of our procedures and the validity of our outcomes. We will then turn the analysis and results over to our GP-B Science Advisory Committee (SAC), that has been closely monitoring our experimental methods, data analysis procedures, and progress for 11 years, to obtain its independent review. In addition, we will seek independent reviews from a number of international experts. Throughout phases II and III, members of our team will be preparing scientific and engineering papers for publication in late 2006-2007. At the same time, we will be working with NASA to plan a formal public announcement of the results of this unprecedented test of General Relativity. We expect to make this announcement of the results in April 2007. Less than a year to go and counting! Garth Recognitions: Homework Help Science Advisor Thanks for the list of off Broadway gravitation theories. Here's another, alternative theorist with a prediction (0.000): http://www.mass-metricgravity.net/ By the way, I'm working on a flat space gravitation simulator. My original purpose was to show how standard GR differed from the Cambridge gauge gravity version of GR. The Cambridge guys say that their version works on flat space and test particles therefore cross the event horizon in finite coordinate time. Their website is http://www.mrao.cam.ac.uk/~clifford/ . For reasons having to do with elementary particles, I find the Cambridge theory convincing, and I thought an animation showing the GR particles getting stuck on the event horizon while the Cambridge particles went on through to the singularity would be convincing. Now so far I've only got the Newtonian gravity running: http://www.gaugegravity.com/testappl...etGravity.html but I should get GR running this weekend, and the Cambridge version (which amounts to allowing a non diagonal metric) soon after. Where this all gets back to this forum is that I would like to include as many gravity theories as possible, and you've listed quite a few. In order for a theory to be used, I have to be able to write the acceleration in terms of position and velocity. Carl Quote by Garth The Gravity Probe B satellite has placed four (over redundant) gyroscopes in low polar Earth orbit to primarily test two predictions of General Relativity. The first effect being tested is (for the GP-B polar orbit) a N-S geodetic precession, caused by the amount a gyro 'leans' over into the slope of curved space. The second effect being tested is the E-W frame-dragging, Lense-Thirring, or gravitomagnetic effect, caused by the spinning Earth dragging space-time around with it. Some researchers, such as Kenneth Nordtvedt, have said that the experiment was worth doing when it was first proposed but that now GR has been verified beyond resonable doubt the result of GP-B is a foregone conclusion. 
I have now discovered several theories competing with General Relativity(GR) that are being tested and falsified by this experiment: my Self Creation Cosmology).(SCC), Moffat's Nonsymmetric Gravitational Theory (NGT), Hai-Long Zhao's mass variance SR theory (MVSR), Stanley Robertson's Newtonian Gravity theory (NG), and Junhao & Xiang's Flat space-time theory (FST). As the results will be published in the not too distant future they could be interesting!! (Note if anybody knows of any other theories with alternative predictions for GP-B please post them as well for comparison.) 1. GPB Geodetic precession GR = 6.6144 arcsec/yr SCC = 4.4096 arcsec/yr NGT = 6.6144 - a small $\sigma$ correction arcsec/yr MVSR = 6.6144 arcsec/yr NG = 1.6536 arcsec/yr FST = 4.4096 arcsec/yr 2. GPB gravitomagnetic frame dragging precession GR = 0.0409 arcsec/yr SCC = 0.0409 arcsec/yr NGT = 0.0409 arcsec/yr MVSR = 0.0102 arcsec/yr NG = 0.0102 arcsec/yr FST = 0.0000 arcsec/yr I cannot vouch for these other theories, they may well be considered 'crackpot' by some, however all these theories have the advantage, together with GR, that they are able to be falsified by the GP-B results. We continue to wait and see! Garth Mass-metric relativity is a scalar theory of gravity, and is based on the increase of mass with speed and with gravitational potential. Its predictions for the gpb are: geodetic rate -6.56124 arcsec/yr. Note the sign, indicating that the precession is backward instead of forward as in GR. Lense-Thirring rate -.01924 arcsec/yr. Actually, the Lense-Thirring rate is zero but a geodetic perturbation caused by the yearly orbit of earth about the sun induces a geodetic precession in the opposite direction. Let the experiment decide. A basic paper on mass-metric relativity is the lasl arXiv 0012059 paper, by R.L. Collins. R.L. Collins Recognitions: Homework Help Science Advisor Professor Collins, Please allow me to be the first to welcome you to physics forums. Here are links to your three very fascinating papers on gravitation, in the order I think they should be read: Changing Mass Corrects Newtonian Gravity Newton's inverse-square law of universal gravitation assumes constant mass. But mass increases with speed and perhaps with gravity. By SR, mass is increased over the rest mass by gamma. Rest mass is here postulated to increase under gravity, by $$1/\alpha =1+GM/rc^2$$. We examine the consequences of introducing this changing mass into Newton's law in flat spacetime. This variable mass affects the metric, relative to an observer away from the influence of gravity, contracting both lengths and times (as measured) by alpha/gamma. The gravitational force, as in orbital calculations, differs from Newton's law by the factor $$(\gamma/\alpha)^3$$, and is not quite inverse square. Without adjustable parameters, this accounts fully for the classical tests of GR. The postulated "fifth force" appears at the $$10^-9$$ g level. Gravitationally-influenced space remains Euclidean, but the mass-metric changes make it seem curved when measured. http://www.arxiv.org/abs/physics/0012059 SN1a Supernova Red Shifts http://www.arxiv.org/abs/physics/0101033 The shrinking Hubble constant http://www.arxiv.org/abs/physics/0601013 By the way, I've just got a first cut of a GR simulating program done. I'm not very sure of it, but it seems like it works okay (but I'm not much of a gravity guy): http://www.gaugegravity.com/testappl...etGravity.html I've set the initial conditions to illustrate a fairly extreme case of precession. 
When I get this program running satisfactorily, I will include your equation of motion. I can hardly wait, but ethanol is keeping me busy right now. Carl Recognitions: Gold Member Quote by rusty Mass-metric relativity is a scalar theory of gravity, and is based on the increase of mass with speed and with gravitational potential. Its predictions for the gpb are: geodetic rate -6.56124 arcsec/yr. Note the sign, indicating that the precession is backward instead of forward as in GR. Lense-Thirring rate -.01924 arcsec/yr. Actually, the Lense-Thirring rate is zero but a geodetic perturbation caused by the yearly orbit of earth about the sun induces a geodetic precession in the opposite direction. Let the experiment decide. A basic paper on mass-metric relativity is the lasl arXiv 0012059 paper, by R.L. Collins. R.L. Collins Thank you rusty, the line up is now: Note: 1. The first effect being tested is (for the GP-B polar orbit) a N-S geodetic precession, caused by the amount a gyro 'leans' over into the slope of curved space. 2. The second effect being tested is the E-W frame-dragging, Lense-Thirring, or gravitomagnetic effect, caused by the spinning Earth dragging space-time around with it. Einstein's General Relativity(GR) Barber's Self Creation Cosmology).(SCC), Moffat's Nonsymmetric Gravitational Theory (NGT), Hai-Long Zhao's mass variance SR theory (MVSR), Stanley Robertson's Newtonian Gravity theory (NG), and Junhao & Xiang's Flat space-time theory (FST). R. L. Collin's Mass-metric relativity (MMR) The predictions are: 1. GPB Geodetic precession GR = 6.6144 arcsec/yr SCC = 4.4096 arcsec/yr NGT = 6.6144 - a small $\sigma$ correction arcsec/yr MVSR = 6.6144 arcsec/yr NG = 1.6536 arcsec/yr FST = 4.4096 arcsec/yr MMR = -6.56124 arcsec/yr 2. GPB gravitomagnetic frame dragging precession GR = 0.0409 arcsec/yr SCC = 0.0409 arcsec/yr NGT = 0.0409 arcsec/yr MVSR = 0.0102 arcsec/yr NG = 0.0102 arcsec/yr FST = 0.0000 arcsec/yr MMR = -0.01924 arcsec/yr Garth Recognitions: Gold Member Quote by rusty Mass-metric relativity is a scalar theory of gravity, and is based on the increase of mass with speed and with gravitational potential. Its predictions for the gpb are: geodetic rate -6.56124 arcsec/yr. Note the sign, indicating that the precession is backward instead of forward as in GR. Lense-Thirring rate -.01924 arcsec/yr. Actually, the Lense-Thirring rate is zero but a geodetic perturbation caused by the yearly orbit of earth about the sun induces a geodetic precession in the opposite direction. Let the experiment decide. A basic paper on mass-metric relativity is the lasl arXiv 0012059 paper, by R.L. Collins. R.L. Collins rusty has MMR been published in a peer reviewed journal? If not you can publish it here in the Independent Research Forum and we can discuss it. Garth Recognitions: Gold Member Now into Phase III of the data analysis of Gravity Probe B. We are now beginning Phase III—the final phase-of the data analysis—which will last until January-February, 2007. Whereas in Phases I and II the focus was on individual gyro performance, during Phase III, the data from all four gyros will be integrated over the entire experiment. The results of this phase will be both individual and correlated changes in gyro spin axis orientation covering the entire 50-week experimental period for all four gyros. These results will be relative to the position of our guide star, IM Pegasi, which changed continually throughout the experiment. 
Thus, the final step in the analysis, currently scheduled to occur early in the spring of 2007, will be to combine our gyro spin axis orientation results with data mapping the proper motion of IM Pegasi relative to the unchanging position of a distant quasar. The proper motion of IM Pegasi has been mapped with unprecedented precision using a technique called Very Long Baseline Interferometry (VLBI) by Irwin Shapiro and his team at the Harvard-Smithsonian Center for Astrophysics (CfA), in collaboration with Norbert Bartel at York University in Toronto and French astronomer Jean-Francois Lestrade. At the end of Phase III, playing the role of our own harshest critic, our science team will then perform a careful and thorough final review of the analysis and results, checking and cross-checking each aspect to ensure the soundness of our procedures and the validity of our outcomes. We will then turn the analysis and results over to the SAC, which has been closely monitoring our experimental methods, data analysis procedures, and progress for 11 years, to obtain its independent review. Moreover, we will seek independent reviews from a number of international experts. In addition to analyzing the data, members of our team are now in the process of preparing scientific and engineering papers for publication in late 2006-2007. We have also begun discussions with NASA to plan a formal public announcement of the results of this unprecedented test of General Relativity. We expect to make this announcement of the results in April 2007. Still April 2007, and counting! Garth Recognitions: Gold Member Science Advisor While we are waiting you may be interested in Francis Everitt's lecture:Testing Einstein in Space: The Gravity Probe B Mission dated 18 May 2006. Garth Recognitions: Gold Member Gravity Probe B Update -- December 22, 2006 ============== GP-B MISSION NEWS ============== A recent story about GP-B in Nature ========================= The December 21-28 2006 issue of Nature (v. 444, p. 978-979) contains a short news article stating that Nature has learned that "two unanticipated effects are clouding the [GP-B] team's frame-dragging results" and also that "results were expected by last summer but the announcement never came." The two issues referred to in Nature have been regularly reported to NASA and our GP-B Science Advisory Committee (SAC) and publicly via these status updates. They are: 1) The effect of polhode motion of the gyros on readout calibration (see the polhode story in last month's update, http://einstein.stanford.edu/highlig...ode_story.html) and 2) misalignment torques observed and calibrated during the post-science instrument calibration phase in August-September 2005 (see the four weekly updates of September 2005, http://einstein.stanford.edu/highlig...indexmain.html. In August 2005, a three-phase data analysis plan was devised in order to properly handle these and other issues. As first reported in May 2006, our intent--reached in agreement with NASA--has been to make the first science announcement in April 2007. This is still our plan. If you want to know more about the Polhode motion see Polhode Behavior in GP-B’s Gyros Roll on April! Garth Quote by Garth The Gravity Probe B satellite has placed four (over redundant) gyroscopes in low polar Earth orbit to primarily test two predictions of General Relativity. The first effect being tested is (for the GP-B polar orbit) a N-S geodetic precession, caused by the amount a gyro 'leans' over into the slope of curved space. 
The second effect being tested is the E-W frame-dragging, Lense-Thirring, or gravitomagnetic effect, caused by the spinning Earth dragging space-time around with it. Some researchers, such as Kenneth Nordtvedt, have said that the experiment was worth doing when it was first proposed but that now GR has been verified beyond resonable doubt the result of GP-B is a foregone conclusion. I have now discovered several theories competing with General Relativity(GR) that are being tested and falsified by this experiment: my Self Creation Cosmology).(SCC), Moffat's Nonsymmetric Gravitational Theory (NGT), Hai-Long Zhao's mass variance SR theory (MVSR), Stanley Robertson's Newtonian Gravity theory (NG), and Junhao & Xiang's Flat space-time theory (FST). As the results will be published in the not too distant future they could be interesting!! (Note if anybody knows of any other theories with alternative predictions for GP-B please post them as well for comparison.) 1. GPB Geodetic precession GR = 6.6144 arcsec/yr SCC = 4.4096 arcsec/yr NGT = 6.6144 - a small $\sigma$ correction arcsec/yr MVSR = 6.6144 arcsec/yr NG = 1.6536 arcsec/yr FST = 4.4096 arcsec/yr 2. GPB gravitomagnetic frame dragging precession GR = 0.0409 arcsec/yr SCC = 0.0409 arcsec/yr NGT = 0.0409 arcsec/yr MVSR = 0.0102 arcsec/yr NG = 0.0102 arcsec/yr FST = 0.0000 arcsec/yr I cannot vouch for these other theories, they may well be considered 'crackpot' by some, however all these theories have the advantage, together with GR, that they are able to be falsified by the GP-B results. We continue to wait and see! Garth Thanks Garth for this interesting overview. I printed out your table and will look at it again when the Gravity Probe B results are available . What is actually the main motivation for inventing alternative theories to GR ? What are their main "advantages" ? Recognitions: Gold Member Quote by notknowing What is actually the main motivation for inventing alternative theories to GR ? What are their main "advantages" ? First to 'push the envelope', the concept of scientific truth is that it is a process, one never should believe that the 'final truth' has been found but that the present best theories are always open to experimental testing and theoretical questioning. Viable alternative theories are important to test the standard theory against, partly to justify and motivate such difficult experiments as Gravity Probe B. As I said in your quote "Some researchers, such as Kenneth Nordtvedt, have said that the experiment was worth doing when it was first proposed but that now GR has been verified beyond reasonable doubt the result of GP-B is a foregone conclusion." The existence of these other theories argues for a more positive attitude to the experiment. There are always questions to be asked of the standard theory that other approaches seek to answer. The main questions about the standard $\Lambda$CDM model IMHO are its necessity to invoke Inflation, exotic non-baryonic DM and DE, while the Higgs Boson/Inflaton the DM particle(s) and DE have not been discovered in laboratory experiments. The existence of the PA and other anomalies are also intriguing. Different alternative theories have different advantages, but to be viable contenders they must not only predict accurately the outcomes of all the experiments and observations predicted by the standard theory but also have a greater explanatory power by doing so more simply. 
Garth

Dear all,

Just to mention that there is another alternative theory of gravity (mine: gr-qc/0610079) with predictions different from the ones that you have listed. This is a DArk Gravity theory. DG predicts:
1) The same geodetic effect as in GR
2) No frame dragging
3) A small (but hopefully within the GP-B accuracy) angular deviation during the year, but with a one-year period (related to the speed of Earth about the Sun).

regards, F H-C
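As a sanity check on the headline GR numbers quoted throughout this thread, here is a rough back-of-envelope calculation (mine, not from any of the posts) using the textbook de Sitter and orbit-averaged Lense-Thirring formulas for a circular polar orbit; the orbital radius of about 7,020 km (roughly 640 km altitude) and the Earth parameters are assumed round values.

```r
# Approximate GR predictions for GP-B (circular polar orbit), in arcsec/yr.
G       <- 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M       <- 5.972e24    # Earth mass, kg
c_light <- 2.998e8     # speed of light, m/s
r       <- 7.02e6      # orbital radius, m (assumed ~640 km altitude)
I_E     <- 8.0e37      # Earth moment of inertia, kg m^2 (approximate)
w_E     <- 7.292e-5    # Earth rotation rate, rad/s
yr      <- 3.156e7     # seconds per year
rad2as  <- 206265      # arcseconds per radian

v <- sqrt(G * M / r)                                    # orbital speed
geodetic  <- 1.5 * G * M * v / (c_light^2 * r^2)        # de Sitter rate, rad/s
framedrag <- G * I_E * w_E / (2 * c_light^2 * r^3)      # Lense-Thirring, polar-orbit average
c(geodetic = geodetic * yr * rad2as, framedrag = framedrag * yr * rad2as)
# roughly 6.6 and 0.04 arcsec/yr, in line with the GR values quoted above
```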
https://bcallaway11.github.io/did/articles/pre-testing.html
Introduction This vignette provides a discussion of how to conduct pre-tests in DiD setups using the did package • One appealing feature of many DiD applications with multiple periods is that the researcher can pre-test the parallel trends assumptions. • The idea here is simple: although one can not always test whether parallel trends itself holds, one can check if it holds in periods before treated units actually become treated. • Importantly, this is just a pre-test; it is different from an actual test. Whether or not the parallel trends assumption holds in pre-treatment periods does not actually tell you if it holds in the current period (and this is when you need it to hold!). It is certainly possible for the identifying assumptions to hold in previous periods but not hold in current periods; it is also possible for identifying assumptions to be violated in previous periods but for them to hold in current periods. That being said, we view the pre-test as a piece of evidence on the credibility of the DiD design in a particular application. • In this vignette, we demonstrate that the approach used in the did package for pre-testing may work substantially better than the more common “event study regression”. Common Approaches to Pre-Testing in Applications By far the most common approach to pre-testing in applications is to run an event-study regression. Here, the idea is to run a regression that includes leads and lags of the treatment dummy variable such as $Y_{it} = \theta_t + \eta_i + \sum_{l=-\mathcal{T}}^{\mathcal{T}-1} D_{it}^l \mu_l + v_{it}$ where $$D_{it}^l = 1$$ if individual $$i$$ has been exposed to the treatment for $$l$$ periods in period $$t$$, and $$D_{it}^l = 0$$ otherwise. To be clear here, it is helpful to give some examples. Suppose individual $$i$$ becomes treated in period 3. Then, • $$D_{it}^0 = 1$$ when $$t=3$$ and is equal to 0 in other time periods • $$D_{it}^2 = 1$$ when $$t=5$$ and is equal to 0 in other time periods • $$D_{it}^{-2} = 1$$ when $$t=1$$ and is equal to 0 in other time periods. And $$\mu_l$$ is interpreted as the effect of treatment for different lengths of exposure to the treatment. Typically, $$\mu_{-1}$$ is normalized to be equal to 0, and we follow that convention here. It is common to interpret estimated $$\mu_l$$’s with $$l < 0$$ as a way to pre-test the parallel trends assumption. Pitfalls with Event Study Regressions Best Case Scenario for Pre-Testing First, let’s start with a case where an event study regression is going to work well for pre-testing the parallel trends assumption # generate dataset with 4 time periods time.periods <- 4 # generate dynamic effects te.e <- time.periods:1 # generate data set with these parameters # (main thing: it generates a dataset that satisfies # parallel trends in all periods...including pre-treatment) data <- build_sim_dataset() #> G X id period Y treat #> 1 2 0.530541 1 1 4.264584 1 #> 8001 2 0.530541 1 2 10.159789 1 #> 16001 2 0.530541 1 3 9.482624 1 #> 24001 2 0.530541 1 4 10.288278 1 #> 2 4 1.123550 2 1 3.252148 1 #> 8002 4 1.123550 2 2 7.038382 1 The main thing to notice here: • The dynamics are common across all groups. This is the case where an event-study regression will work. 
Next, a bit more code #----------------------------------------------------------------------------- # modify the dataset a bit so that we can run an event study #----------------------------------------------------------------------------- # generate leads and lags of the treatment Dtl <- sapply(-(time.periods-1):(time.periods-2), function(l) { dtl <- 1*( (data$period == data$G + l) & (data$G > 0) ) dtl }) Dtl <- as.data.frame(Dtl) cnames1 <- paste0("Dtmin",(time.periods-1):1) colnames(Dtl) <- c(cnames1, paste0("Dt",0:(time.periods-2))) data <- cbind.data.frame(data, Dtl) row.names(data) <- NULL head(data) #> G X id period Y treat Dtmin3 Dtmin2 Dtmin1 Dt0 Dt1 Dt2 #> 1 2 0.530541 1 1 4.264584 1 0 0 1 0 0 0 #> 2 2 0.530541 1 2 10.159789 1 0 0 0 1 0 0 #> 3 2 0.530541 1 3 9.482624 1 0 0 0 0 1 0 #> 4 2 0.530541 1 4 10.288278 1 0 0 0 0 0 1 #> 5 4 1.123550 2 1 3.252148 1 1 0 0 0 0 0 #> 6 4 1.123550 2 2 7.038382 1 0 1 0 0 0 0 #----------------------------------------------------------------------------- # run the event study regression #----------------------------------------------------------------------------- # load plm package library(plm) # run event study regression # normalize effect to be 0 in pre-treatment period es <- plm(Y ~ Dtmin3 + Dtmin2 + Dt0 + Dt1 + Dt2, data=data, model="within", effect="twoways", index=c("id","period")) summary(es) #> Twoways effects Within Model #> #> Call: #> plm(formula = Y ~ Dtmin3 + Dtmin2 + Dt0 + Dt1 + Dt2, data = data, #> effect = "twoways", model = "within", index = c("id", "period")) #> #> Balanced Panel: n = 6988, T = 4, N = 27952 #> #> Residuals: #> Min. 1st Qu. Median 3rd Qu. Max. #> -9.8577453 -0.7594999 0.0066022 0.7639995 11.6570179 #> #> Coefficients: #> Estimate Std. Error t-value Pr(>|t|) #> Dtmin3 0.041851 0.070134 0.5967 0.5507 #> Dtmin2 0.023539 0.050044 0.4704 0.6381 #> Dt0 4.012515 0.043824 91.5602 <2e-16 *** #> Dt1 3.005615 0.054473 55.1767 <2e-16 *** #> Dt2 2.098672 0.077281 27.1565 <2e-16 *** #> --- #> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 #> #> Total Sum of Squares: 82617 #> Residual Sum of Squares: 55795 #> R-Squared: 0.32466 #> Adj. R-Squared: 0.099232 #> F-statistic: 2014.84 on 5 and 20956 DF, p-value: < 2.22e-16 #----------------------------------------------------------------------------- # make an event study plot #----------------------------------------------------------------------------- # some housekeeping for making the plot # add 0 at event time -1 coefs1 <- coef(es) ses1 <- sqrt(diag(summary(es)$vcov)) idx.pre <- 1:(time.periods-2) idx.post <- (time.periods-1):length(coefs1) coefs <- c(coefs1[idx.pre], 0, coefs1[idx.post]) ses <- c(ses1[idx.pre], 0, ses1[idx.post]) exposure <- -(time.periods-1):(time.periods-2) cmat <- data.frame(coefs=coefs, ses=ses, exposure=exposure) library(ggplot2) ggplot(data=cmat, mapping=aes(y=coefs, x=exposure)) + geom_line(linetype="dashed") + geom_point() + geom_errorbar(aes(ymin=(coefs-1.96*ses), ymax=(coefs+1.96*ses)), width=0.2) + ylim(c(-2,5)) + theme_bw() You will notice that everything looks good here. The pre-test performs well (the caveat to this is that the standard errors are “pointwise” and would be better to have uniform confidence bands though this does not seem to be standard practice in applications). 
We can compare this to what happens using the did package: # estimate group-group time average treatment effects did_att_gt <- att_gt(yname="Y", tname="period", idname="id", gname="G", data=data, bstrap=FALSE, cband=FALSE) summary(did_att_gt) #> #> Call: #> att_gt(yname = "Y", tname = "period", idname = "id", gname = "G", #> data = data, bstrap = FALSE, cband = FALSE) #> #> Reference: Callaway, Brantly and Pedro H.C. Sant'Anna. "Difference-in-Differences with Multiple Time Periods." Journal of Econometrics, Vol. 225, No. 2, pp. 200-230, 2021. <https://doi.org/10.1016/j.jeconom.2020.12.001>, <https://arxiv.org/abs/1803.09015> #> #> Group-Time Average Treatment Effects: #> Group Time ATT(g,t) Std. Error [95% Pointwise Conf. Band] #> 2 2 4.0048 0.0859 3.8364 4.1732 * #> 2 3 3.0652 0.1129 2.8438 3.2865 * #> 2 4 2.1123 0.1466 1.8249 2.3997 * #> 3 2 -0.0276 0.0491 -0.1237 0.0685 #> 3 3 3.9993 0.1033 3.7968 4.2018 * #> 3 4 2.9554 0.1315 2.6976 3.2132 * #> 4 2 -0.0420 0.0516 -0.1433 0.0592 #> 4 3 0.0016 0.0491 -0.0946 0.0977 #> 4 4 4.0304 0.1369 3.7620 4.2988 * #> --- #> Signif. codes: *' confidence band does not cover 0 #> #> P-value for pre-test of parallel trends assumption: 0.79239 #> Control Group: Never Treated, Anticipation Periods: 0 #> Estimation Method: Doubly Robust # plot them ggdid(did_att_gt) # aggregate them into event study plot did_es <- aggte(did_att_gt, type="dynamic") # plot the event study ggdid(did_es) Overall, everything looks good using either approach. (Just to keep things fair, we report pointwise confidence intervals for group-time average treatment effects, but it is easy to get uniform confidence bands by setting the options bstrap=TRUE, cband=TRUE to the call to att_gt.) Pitfall: Selective Treatment Timing Sun and Abraham (2021) point out a major limitation of event study regressions: when there is selective treatment timing the $$\mu_l$$ end up being weighted averages of treatment effects across different lengths of exposures. Selective treatment timing means that individuals in different groups experience systematically different effects of participating in the treatment from individuals in other groups. For example, there would be selective treatment timing if individuals choose to be treated in earlier periods if they tend to experience larger benefits from participating in the treatment. This sort of selective treatment timing is likely to be present in many applications in economics / policy evaluation. Contrary to event study regressions, pre-tests based on group-time average treatment effects (or based on group-time average treatment effects that are aggregated into an event study plot) are still valid even in the presence of selective treatment timing. To see this in action, let’s keep the same example as before, but add selective treatment timing. 
# generate dataset with 4 time periods time.periods <- 4 # generate dynamic effects te.e <- time.periods:1 # generate selective treatment timing # (*** this is what is different here ***) te.bet.ind <- time.periods:1 / (time.periods/2) # generate data set with these parameters # (main thing: it generates a dataset that satisfies # parallel trends in all periods...including pre-treatment) data <- build_sim_dataset() # run through same code as in earlier example...omitted # run event study regression # normalize effect to be 0 in pre-treatment period es <- plm(Y ~ Dtmin3 + Dtmin2 + Dt0 + Dt1 + Dt2, data=data, model="within", effect="twoways", index=c("id","period")) summary(es) #> Twoways effects Within Model #> #> Call: #> plm(formula = Y ~ Dtmin3 + Dtmin2 + Dt0 + Dt1 + Dt2, data = data, #> effect = "twoways", model = "within", index = c("id", "period")) #> #> Balanced Panel: n = 6988, T = 4, N = 27952 #> #> Residuals: #> Min. 1st Qu. Median 3rd Qu. Max. #> -10.158205 -0.751216 0.023376 0.805780 11.190180 #> #> Coefficients: #> Estimate Std. Error t-value Pr(>|t|) #> Dtmin3 0.938180 0.071538 13.1144 < 2.2e-16 *** #> Dtmin2 0.424155 0.051047 8.3092 < 2.2e-16 *** #> Dt0 3.655215 0.044701 81.7695 < 2.2e-16 *** #> Dt1 3.199891 0.055563 57.5898 < 2.2e-16 *** #> Dt2 2.610583 0.078828 33.1173 < 2.2e-16 *** #> --- #> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 #> #> Total Sum of Squares: 79441 #> Residual Sum of Squares: 58052 #> R-Squared: 0.26925 #> F-statistic: 1544.28 on 5 and 20956 DF, p-value: < 2.22e-16 # run through same code as before...omitted # new event study plot ggplot(data=cmat, mapping=aes(y=coefs, x=exposure)) + geom_line(linetype="dashed") + geom_point() + geom_errorbar(aes(ymin=(coefs-1.96*ses), ymax=(coefs+1.96*ses)), width=0.2) + ylim(c(-2,5)) + theme_bw() In contrast to the last case, it is clear that things have gone wrong here. Parallel trends holds in all time periods and for all groups here, but the event study regression incorrectly rejects that parallel trends holds – this is due to the selective treatment timing. We can compare this to what happens using the did package: # estimate group-group time average treatment effects did.att.gt <- att_gt(yname="Y", tname="period", idnam="id", gname="G", data=data ) summary(did.att.gt) #> #> Call: #> att_gt(yname = "Y", tname = "period", idname = "id", gname = "G", #> data = data) #> #> Reference: Callaway, Brantly and Pedro H.C. Sant'Anna. "Difference-in-Differences with Multiple Time Periods." Journal of Econometrics, Vol. 225, No. 2, pp. 200-230, 2021. <https://doi.org/10.1016/j.jeconom.2020.12.001>, <https://arxiv.org/abs/1803.09015> #> #> Group-Time Average Treatment Effects: #> Group Time ATT(g,t) Std. Error [95% Simult. Conf. Band] #> 2 2 5.0030 0.0873 4.7710 5.2349 * #> 2 3 4.0634 0.1168 3.7529 4.3738 * #> 2 4 3.1105 0.1487 2.7152 3.5057 * #> 3 2 -0.0276 0.0504 -0.1616 0.1064 #> 3 3 3.9993 0.1015 3.7296 4.2690 * #> 3 4 2.9554 0.1291 2.6124 3.2985 * #> 4 2 -0.0420 0.0519 -0.1800 0.0959 #> 4 3 0.0016 0.0500 -0.1312 0.1343 #> 4 4 2.0335 0.1486 1.6385 2.4284 * #> --- #> Signif. 
codes: *' confidence band does not cover 0 #> #> P-value for pre-test of parallel trends assumption: 0.79239 #> Control Group: Never Treated, Anticipation Periods: 0 #> Estimation Method: Doubly Robust # plot them ggdid(did.att.gt) # aggregate them into event study plot did.es <- aggte(did.att.gt, type="dynamic") # plot the event study ggdid(did.es) This is the correct performance (up to aforementioned caveats about multiple hypothesis testing). Conditional Moment Tests Another main use case for the did package is when the parallel trends assumptions holds after conditioning on some covariates. This is likely to be important in many applications. For example, to evaluate the effect of participating in a job training program on earnings, it is likely to be important to condition on an individual’s education. This would be true if (i) the distribution of education is different for individuals that participate in job training relative to those that don’t (this is very likely to hold as people that participate in job training tend to have less education than those who do not), and (ii) if the path of earnings (absent participating in job training) depends on an individual’s education. See Heckman, Ichimura, and Todd (1998) and Abadie (2005) for more discussion. Even when one includes covariates to estimate group-time average treatment effects, pre-tests based only on group-time average treatment effects can fail to detect some violations of the parallel trends assumption. To give an example, suppose that the only covariate is binary variable for an individual’s sex. Pre-tests based on group-time average treatment effects could fail to detect violations of the conditional parallel trends assumption in cases where it is violated in one direction for men and in the other direction for women. The did package contains an additional pre-test for the conditional parallel trends assumption in the conditional_did_pretest function. # not run (this code can be substantially slower) reset.sim() set.seed(1814) nt <- 1000 nu <- 1000 cdp <- conditional_did_pretest("Y", "period", "id", "G", xformla=~X, data=data) cdp
http://assert.pub/arxiv/astro-ph/all/
Top 10 Arxiv Papers Today in Astrophysics #1. The hidden giant: discovery of an enormous Galactic dwarf satellite in Gaia DR2 G. Torrealba, V. Belokurov, S. E. Koposov, T. S. Li, M. G. Walker, J. L. Sanders, A. Geringer-Sameth, D. B. Zucker, K. Kuehn, N. W. Evans, W. Dehnen We report the discovery of a Milky-Way satellite in the constellation of Antlia. The Antlia 2 dwarf galaxy is located behind the Galactic disc at a latitude of $b\sim 11^{\circ}$ and spans 1.26 degrees, which corresponds to $\sim2.9$ kpc at its distance of 130 kpc. While similar in extent to the Large Magellanic Cloud, Antlia~2 is orders of magnitude fainter with $M_V=-8.5$ mag, making it by far the lowest surface brightness system known (at $32.3$ mag/arcsec$^2$), $\sim100$ times more diffuse than the so-called ultra diffuse galaxies. The satellite was identified using a combination of astrometry, photometry and variability data from Gaia Data Release 2, and its nature confirmed with deep archival DECam imaging, which revealed a conspicuous BHB signal in agreement with distance obtained from Gaia RR Lyrae. We have also obtained follow-up spectroscopy using AAOmega on the AAT to measure the dwarf's systemic velocity, $290.9\pm0.5$km/s, its velocity dispersion, $5.7\pm1.1$ km/s, and mean metallicity, [Fe/H]$=-1.4$. From these... more | pdf | html Tweets cosmos4u: Astronomy really needed something like that ... an "enormous dwarf galaxy" - Antlia 2 is similar in extent to the Large Magellanic Cloud but orders of magnitude fainter: https://t.co/YfL73W8UZc mmoyr: Just discovered: An enormous galaxy (about twice as big as the moon appears) orbiting the Milky Way. But it's incredibly faint. https://t.co/ftcdw0GJVe kevaba: Astronomers discover an enormous but dim dwarf galaxy orbiting our Milky Way Galaxy, using Gaia data: "The origin of this core may be consistent with aggressive feedback, or may even require alternatives to cold dark matter" https://t.co/J0LuPIKVWd MBKplus: First there was the "feeble giant" (https://t.co/W4q9O7rTII). Now, the "hidden giant" (https://t.co/tUOmCNzQrV). Amazing discoveries from Torrealba et al. of Draco-mass galaxies with sizes that are a factor of 5-10 larger, hidden in the Milky Way! https://t.co/hcWhM2DgTw 8minutesold: A new satellite galaxy of the Milky Way has been discovered using Gaia DR2 data: Antlia 2. It is pretty weird: the lowest-surface brightness satellite known, and apparently living in one of the lowest-density DM halos, too. https://t.co/ZpiL2jOfDl What about VPOS? MOND? Thread👇 https://t.co/GcQMDUGurg Jos_de_Bruijne: "The hidden giant: discovery of an enormous Galactic dwarf satellite in #GaiaDR2" https://t.co/OPKhM2PToj "We report the discovery [using a combination of astrometry, photometry and variability] of a Milky-Way satellite in the constellation of Antlia [at a distance of 130 kpc]" https://t.co/O2YYBJGJNa conselice: Interesting result - Gaia has found a dwarf galaxy which is 1.26 deg on the sky - over twice as big as the moon appears. However you'd never see it by eye, or even with deep imaging, given that this has a surface brightness of ~32. https://t.co/mMjx3MaVwi AstroRoque: A new satellite of the Milky Way discovered with @ESAGaia, Antlia 2 dwarf galaxy, sets a new limit for the dimmest and most diffuse system known. https://t.co/T2mLbBrNQh Inferred orbit of Antlia 2, (Figure 8) https://t.co/gbU9W2ruvv neuronomer: Brilliant paper by Torrealba et al. on arXiv today. Discovery of another satellite of the Milky Way! 
Is it controversial to think that the most convincing of these three plots is the proper-motion one? https://t.co/dURVpD3bIG https://t.co/NjcIe258aU
Other stats Sample Sizes: None. Authors: 11 Total Words: 18930 Unique Words: 5712
#2. The Spur and the Gap in GD-1: Dynamical evidence for a dark substructure in the Milky Way halo
Ana Bonaca, David W. Hogg, Adrian M. Price-Whelan, Charlie Conroy
We present a model for the interaction of the GD-1 stellar stream with a massive perturber that naturally explains many of the observed stream features, including a gap and an off-stream spur of stars.
The model involves an impulse by a fast encounter, after which the stream grows a loop of stars at different orbital energies. At specific viewing angles, this loop appears offset from the stream track. The configuration-space observations are sensitive to the mass, age, impact parameter, and total velocity of the encounter, and future velocity observations will constrain the full velocity vector of the perturber. A quantitative comparison of the spur and gap features prefers models where the perturber is in the mass range of $10^6\,\rm M_\odot$ to $10^8\,\rm M_\odot$. Orbit integrations back in time show that the stream encounter could not have been caused by any known globular cluster or dwarf galaxy, and mass, size and impact-parameter arguments show that it could not have been caused by a molecular cloud in the Milky Way disk.... more | pdf | html Tweets adamspacemann: This is exciting: astronomers think that a big glob of dark matter could be what disrupted stellar stream GD-1 around the Milky Way https://t.co/Mpdsqy755u adrianprw: On the #arxiv today: evidence for a dark substructure in the Milky Way halo from the morphology of the GD-1 stream! https://t.co/hyTk9gKW8Y (led by @anabonaca w/ @davidwhogg, Charlie Conroy) and see https://t.co/lixElXikTf -- featuring @ESAGaia DR2 data! Jos_de_Bruijne: "The Spur and the Gap in GD-1: Dynamical evidence for a dark substructure in the #MilkyWay halo" https://t.co/w7AyuVzfYZ"We present a model for the interaction of the GD-1 stream with a perturber that naturally explains many of the observed stream features" #GaiaMission #GaiaDR2 https://t.co/FYgoSFf9iR SaschaCaron: Hint for Dark Matter substructure from stellar streams ?https://t.co/ukobnoMVmQ anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and privilege to work with @davidwhogg, @adrianprw, Charlie Conroy, and the @ESAGaia data. AstroPHYPapers: The Spur and the Gap in GD-1: Dynamical evidence for a dark substructure in the Milky Way halo. https://t.co/D7D8uhctXU scimichael: The Spur and the Gap in GD-1: Dynamical evidence for a dark substructure in the Milky Way halo https://t.co/Wa5dngzFQD vancalmthout: RT @SaschaCaron: Hint for Dark Matter substructure from stellar streams ?https://t.co/ukobnoMVmQ adrianprw: RT @anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and pri… ReadDark: RT @anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and pri… johngizis: RT @anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and pri… nbody6: RT @anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and pri… Jos_de_Bruijne: RT @anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and pri… Motigomeman: RT @AstroPHYPapers: The Spur and the Gap in GD-1: Dynamical evidence for a dark substructure in the Milky Way halo. https://t.co/D7D8uhctXU deniserkal: RT @anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and pri… Katelinsaurus: RT @AstroPHYPapers: The Spur and the Gap in GD-1: Dynamical evidence for a dark substructure in the Milky Way halo. 
https://t.co/D7D8uhctXU isalsalism: RT @anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and pri… garavito_nico: RT @anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and pri… JeffCarlinastro: RT @anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and pri… gorankab: RT @anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and pri… None. None. Other stats Sample Sizes : None. Authors: 4 Total Words: 10347 Unqiue Words: 2747 #3. Contrast sensitivities in the Gaia Data Release 2 Alexis Brandeker, Gianni Cataldi The source detection sensitivity of Gaia is reduced near sources. To characterise this contrast sensitivity is important for understanding the completeness of the Gaia data products, in particular when evaluating source confusion in less well resolved surveys, such as in photometric monitoring for transits. Here, we statistically evaluate the catalog source density to determine the Gaia Data Release 2 source detection sensitivity as a function of angular separation and brightness ratio from a bright source. The contrast sensitivity from 0.4 arcsec out to 12 arcsec ranges in DG = 0-14 mag. We find the derived contrast sensitivity to be robust with respect to target brightness, colour, source density, and Gaia scan coverage. more | pdf | html Tweets Jos_de_Bruijne: "Contrast sensitivities in #GaiaDR2" https://t.co/NOWBumQutj "We statistically evaluate the catalog source density to determine the #GaiaDR2 source detection sensitivity as a function of angular separation and brightness ratio from a bright source" #GaiaMission https://t.co/8DiugeREgI None. None. Other stats Sample Sizes : None. Authors: 2 Total Words: 2699 Unqiue Words: 1040 #4. Spectacular HST observations of the Coma galaxy D100 and star formation in its ram pressure stripped tail William J. Cramer, Jeffrey D. P. Kenney, Ming Sun, Hugh Crowl, Masafumi Yagi, Pavel Jáchym, Elke Roediger, Will Waldron We present new HST F275W, F475W, and F814W imaging of the region of the Coma cluster around D100, a spiral galaxy with a remarkably long and narrow ($60 \times 1.5$ kpc) ram pressure stripped gas tail. We find blue sources coincident with the H$\alpha$ tail, which we identify as young stars formed in the tail. We also determine they are likely to be unbound stellar complexes with sizes of $\sim$ $50-100$ pc, likely to disperse as they age. From a comparison of the colors and magnitudes of the young stellar complexes with simple stellar population models, we find ages ranging from $\sim$ $1-50$ Myr, and masses ranging from $10^3$ to $\sim$ $10^5$ M$_{\odot}$. We find the overall rate and efficiency of star formation are low, $\sim$ $6.0 \times \, 10^{-3}$ $M_{\odot}$ yr$^{-1}$ and $\sim$ $6 \, \times$ 10$^{-12}$ yr$^{-1}$ respectively. The total H$\alpha$ flux of the tail would correspond to a star formation rate $7$ times higher, indicating some other mechanism for H$\alpha$ excitation is dominant. From analysis of colors, we... 
more | pdf | html Tweets emulenews: Spectacular HST observations of the Coma galaxy D100 and star formation in its ram pressure stripped tail https://t.co/WOHwC8iNcH https://t.co/zKjKYAq0mY AstroPHYPapers: Spectacular HST observations of the Coma galaxy D100 and star formation in its ram pressure stripped tail. https://t.co/cMqv1UBWMd None. None. Other stats Sample Sizes : None. Authors: 8 Total Words: 18395 Unqiue Words: 3933 #5. Imprints of local lightcone projection effects on the galaxy bispectrum IV: Second-order vector and tensor contributions Sheean Jolicoeur, Alireza Allahyari, Chris Clarkson, Julien Larena, Obinna Umeh, Roy Maartens The galaxy bispectrum on scales around and above the equality scale receives contributions from relativistic effects. Some of these arise from lightcone deformation effects, which come from local and line-of-sight integrated contributions. Here we calculate the local contributions from the generated vector and tensor background which is formed as scalar modes couple and enter the horizon. We show that these modes are sub-dominant when compared with other relativistic contributions. more | pdf | html None. Tweets RelativityPaper: Imprints of local lightcone projection effects on the galaxy bispectrum IV: Second-order vector and tensor contributions. https://t.co/r0cYjFrf0b None. None. Other stats Sample Sizes : None. Authors: 6 Total Words: 6305 Unqiue Words: 1671 #6. How to measure galaxy star formation histories II: Nonparametric models Joel Leja, Adam C. Carnall, Benjamin D. Johnson, Charlie Conroy, Joshua S. Speagle Nonparametric star formation histories (SFHs) have long promised to be the "gold standard" for galaxy SED modeling as they are flexible enough to describe the full diversity of SFH shapes, whereas parametric models rule out a significant fraction of these shapes {\it a priori}. However, this flexibility isn't fully constrained even with high-quality observations, making it critical to choose a well-motivated prior. Here we use the SED-fitting code Prospector to explore the effect of different nonparametric priors by fitting SFHs to mock UV-IR photometry generated from a diverse set of input SFHs. First, we confirm that nonparametric SFHs recover input SFHs with less bias and return more accurate errors than parametric SFHs. We further find that while nonparametric SFHs robustly recover the overall shape of the input SFH, the primary determinant of the size and shape of the posterior SFR(t) is the choice of prior rather than the photometric noise. As a practical demonstration, we fit the UV-IR photometry of $\sim$6000 galaxies from... more | pdf | html Tweets AstroPHYPapers: How to measure galaxy star formation histories II: Nonparametric models. https://t.co/INWR7ptgst DivakaraMayya: RT @AstroPHYPapers: How to measure galaxy star formation histories II: Nonparametric models. https://t.co/INWR7ptgst Github Dynamic Nested Sampling package for computing Bayesian posteriors and evidences Repository: dynesty User: joshspeagle Language: Python Stargazers: 34 Subscribers: 14 Forks: 11 Open Issues: 8 None. Other stats Sample Sizes : [25, 2, 100] Authors: 5 Total Words: 13032 Unqiue Words: 2906 #7. Investigating the noise residuals around the gravitational wave event GW150914 Alex B. Nielsen, Alexander H. Nitz, Collin D. Capano, Duncan A. Brown We use the Pearson cross-correlation statistic proposed by Liu and Jackson \cite{Liu:2016kib}, and employed by Creswell et al. 
\cite{Creswell:2017rbh}, to look for statistically significant correlations between the LIGO Hanford and Livingston detectors at the time of the binary black hole merger GW150914. We compute this statistic for the calibrated strain data released by LIGO, using both the residuals provided by LIGO and using our own subtraction of a maximum-likelihood waveform that is constructed to model binary black hole mergers in general relativity. To assign a significance to the values obtained, we calculate the cross-correlation of both simulated Gaussian noise and data from the LIGO detectors at times during which no detection of gravitational waves has been claimed. We find that after subtracting the maximum likelihood waveform there are no statistically significant correlations between the residuals of the two detectors at the time of GW150914. more | pdf | html Tweets mpi_grav: You can also use @LIGO Open data to investigate possible excess correlations around #GW150914. @mpi_grav researchers did that together with colleagues in this new paper https://t.co/npIbAiD2XA. Have a look at https://t.co/8CeWYUKjjG, if you want to reproduce their results. https://t.co/uYT3eV8DiQ AstroPHYPapers: Investigating the noise residuals around the gravitational wave event GW150914. https://t.co/iw7MDpVZC3 joseru: https://t.co/0e4iyqbLlb emulenews: RT @AstroPHYPapers: Investigating the noise residuals around the gravitational wave event GW150914. https://t.co/iw7MDpVZC3 mwgc1995: RT @AstroPHYPapers: Investigating the noise residuals around the gravitational wave event GW150914. https://t.co/iw7MDpVZC3 Marianasasha: RT @AstroPHYPapers: Investigating the noise residuals around the gravitational wave event GW150914. https://t.co/iw7MDpVZC3 Github Investigating the noise residuals around the gravitational wave event GW150914 Repository: gw150914_investigation User: gwastro Language: Jupyter Notebook Stargazers: 3 Subscribers: 7 Forks: 0 Open Issues: 0 None. Other stats Sample Sizes : None. Authors: 4 Total Words: 4866 Unqiue Words: 1413 #8. Probing the Inner Disk Emission of the Herbig Ae Stars HD 163296 and HD 190073 Benjamin R. Setterholm, John D. Monnier, Claire L. Davies, Alexander Kreplin, Stefan Kraus, Fabien Baron, Alicia Aarnio, Jean-Philippe Berger, Nuria Calvet, Michel Curé, Samer Kanaan, Brian Kloppenborg, Jean-Baptiste Le Bouquin, Rafael Millan-Gabet, Adam E. Rubinstein, Michael L. Sitko, Judit Sturmann, Theo A. ten Brummelaar, Yamina Touhami The physical processes occurring within the inner few astronomical units of proto-planetary disks surrounding Herbig Ae stars are crucial to setting the environment in which the outer planet-forming disk evolves and put critical constraints on the processes of accretion and planet migration. We present the most complete published sample of high angular resolution H- and K-band observations of the stars HD 163296 and HD 190073, including 30 previously unpublished nights of observations of the former and 45 nights of the latter with the CHARA long-baseline interferometer, in addition to archival VLTI data. We confirm previous observations suggesting significant near-infrared emission originates within the putative dust evaporation front of HD 163296 and show this is the case for HD 190073 as well. The H- and K-band sizes are the same within $(3 \pm 3)\%$ for HD 163296 and within $(6 \pm 10)\%$ for HD 190073. The radial surface brightness profiles for both disks are remarkably Gaussian-like with little or no sign of the sharp edge... 
more | pdf | html Tweets AliciaAarnio: Also authors: @AstroMonnier, @astrokraus, @bkloppenborg, and @TenTheoten! (Ben doesn't have twitter?) https://t.co/JeqJPh847x Probing the Inner Disk Emission of the Herbig Ae Stars HD 163296 and HD 190073 https://t.co/Ec5aN3HQsn AstroPHYPapers: Probing the Inner Disk Emission of the Herbig Ae Stars HD 163296 and HD 190073. https://t.co/l1iNMm1n50 Github Python module for OIFITS format Repository: oifits User: pboley Language: Python Stargazers: 1 Subscribers: 2 Forks: 0 Open Issues: 0 None. Other stats Sample Sizes : None. Authors: 19 Total Words: 10517 Unqiue Words: 2960 #9. Black hole growth through hierarchical black hole mergers in dense star clusters: implications for gravitational wave detections Fabio Antonini, Mark Gieles, Alessia Gualandris In a star cluster with a sufficiently large escape velocity, black holes (BHs) that are produced by BH mergers can be retained, dynamically form new {BH} binaries, and merge again. This process can repeat several times and lead to significant mass growth. In this paper, we calculate the mass of the largest BH that can be formed through repeated mergers of stellar seed BHs and determine how its value depends on the physical properties of the host cluster. We adopt an analytical model in which the energy generated by the black hole binaries in the cluster core is assumed to be regulated by the process of two-body relaxation in the bulk of the system. This principle is used to compute the hardening rate of the binaries and to relate this to the time-dependent global properties of the parent cluster. We demonstrate that in clusters with initial escape velocity $\gtrsim 300\rm km\ s^{-1}$ in the core and density $\gtrsim 10^5\ M_\odot\rm pc^{-3}$, repeated mergers lead to the formation of BHs in the mass range $100-10^5 \,M_\odot$,... more | pdf | html None. Tweets arxiv_org: Black hole growth through hierarchical black hole mergers in dense star clusters: implica... https://t.co/KIsHJb5MoP https://t.co/8tj6Pf4G8K RelativityPaper: Black hole growth through hierarchical black hole mergers in dense star clusters: implications for gravitational wave detections. https://t.co/TLnTeA1v1A avila71201798: RT @arxiv_org: Black hole growth through hierarchical black hole mergers in dense star clusters: implica... https://t.co/KIsHJb5MoP https:/… None. None. Other stats Sample Sizes : None. Authors: 3 Total Words: 11067 Unqiue Words: 2592 #10. An Adaptive Optics Survey of Stellar Variability at the Galactic Center Abhimat Krishna Gautam., Tuan Do, Andrea M. Ghez, Mark R. Morris, Gregory D. Martinez, Matthew W. Hosek Jr., Jessica R. Lu, Shoko Sakai, Gunther Witzel, Siyao Jia, Eric E. Becklin, Keith Matthews We present a $\approx 11.5$ year adaptive optics (AO) study of stellar variability and search for eclipsing binaries in the central $\sim 0.4$ pc ($\sim 10''$) of the Milky Way nuclear star cluster. We measure the photometry of 563 stars using the Keck II NIRC2 imager ($K'$-band, $\lambda_0 = 2.124 \text{ } \mu \text{m}$). We achieve a photometric uncertainty floor of $\Delta m_{K'} \sim 0.03$ ($\approx 3\%$), comparable to the highest precision achieved in other AO studies. Approximately half of our sample ($50 \pm 2 \%$) shows variability. $52 \pm 5\%$ of known early-type young stars and $43 \pm 4 \%$ of known late-type giants are variable. These variability fractions are higher than those of other young, massive star populations or late-type giants in globular clusters, and can be largely explained by two factors. 
First, our experiment time baseline is sensitive to long-term intrinsic stellar variability. Second, the proper motion of stars behind spatial inhomogeneities in the foreground extinction screen can lead to... more | pdf | html Tweets AstroPHYPapers: An Adaptive Optics Survey of Stellar Variability at the Galactic Center. https://t.co/oRrSdQGiMg None. None. Other stats Sample Sizes : None. Authors: 12 Total Words: 29630 Unqiue Words: 6600 Assert is a website where the best academic papers on arXiv (computer science, math, physics), bioRxiv (biology), BITSS (reproducibility), EarthArXiv (earth science), engrXiv (engineering), LawArXiv (law), PsyArXiv (psychology), SocArXiv (social science), and SportRxiv (sport research) bubble to the top each day. Papers are scored (in real-time) based on how verifiable they are (as determined by their Github repos) and how interesting they are (based on Twitter). To see top papers, follow us on twitter @assertpub_ (arXiv), @assert_pub (bioRxiv), and @assertpub_dev (everything else). To see beautiful figures extracted from papers, follow us on Instagram. Tracking 56,474 papers. Search Sort results based on if they are interesting or reproducible. Interesting Reproducible Online Stats Tracking 56,474 papers.
https://tex.stackexchange.com/questions/306685/suppressing-in-the-word-chapter-n-from-header-documentclass-report
# Suppressing the word "Chapter N" from the header, documentclass: report

I want to suppress the word "Chapter n" from my header. I want to have only the chapter title in the header, e.g. "Introduction" rather than "Chapter 1. Introduction". The document class is report. Here is the full preamble of my document:

\documentclass[a4paper,12pt,twoside]{report}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage[nottoc]{tocbibind}
\usepackage{titlesec}
\titleformat{\chapter}{\normalfont\huge\bf}{\thechapter}{20pt}{\huge\bf}
\renewcommand{\baselinestretch}{1.5}
\usepackage{graphicx}
\usepackage{caption}
\usepackage{amsmath}
\usepackage{ccaption}
\usepackage{subcaption}
\graphicspath{{images/}}
\usepackage[a4paper, width=150mm, top=25mm, bottom=45mm, bindingoffset=6mm]{geometry}
\usepackage{fancyhdr}
\pagestyle{fancy}
\renewcommand{\chaptername}{}
\fancyfoot{}
\setcounter{secnumdepth}{4}
\pagenumbering{Roman}
\usepackage{cite}

• if i understand the question correctly, all you want to do is omit the "chapter n" from above the chapter title. if that is true, then you can just use \chapter*{...} (the asterisk * gives the instruction to omit the number from the heading). with the report class, this also omits the entry for the chapter from the table of contents, but that can be added back explicitly with \addcontentsline. (search for that in other questions.) – barbara beeton Apr 27 '16 at 21:18

If you want to keep the chapter name in the header line in all-caps, insert the instruction

\def\chaptermark#1{\markboth{\MakeUppercase{#1}}{}}

in the preamble. If you do not want the chapter name rendered in all-caps, use the instruction

\def\chaptermark#1{\markboth{#1}{}}

• Note that the proposed solution method is specific to the report document class. Other solutions may be needed if one uses other document classes. – Mico Apr 27 '16 at 15:12

If I understand well what you want, it's easy to do with titleps, a companion to titlesec:

\documentclass[a4paper,12pt,twoside]{report}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage[nottoc]{tocbibind}
\usepackage[pagestyles]{titlesec}
\titleformat{\chapter}{\normalfont\huge\bfseries}{\thechapter}{20pt}{}
\renewcommand{\baselinestretch}{1.5}
\usepackage{graphicx}
\usepackage{caption}
\usepackage{amsmath}
\usepackage{ccaption}
\usepackage{subcaption}
\graphicspath{{images/}}
\usepackage[a4paper, width=150mm, top=25mm, bottom=45mm, bindingoffset=6mm]{geometry}
\setcounter{secnumdepth}{4}
\pagenumbering{Roman}
\usepackage{cite}
\usepackage{lipsum}

\newpagestyle{ownstyle}{%
\setfoot{}{}{}
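A minimal way to finish that page style — a sketch of what the answer appears to be driving at, with the \sethead arguments being an assumption rather than the original code — is to put only the chapter title and the page number into the header and then activate the style:

\newpagestyle{ownstyle}{%
  \setfoot{}{}{}%                      % empty footer
  \sethead{}{\chaptertitle}{\thepage}% % chapter title centred, page number on the right
}
\pagestyle{ownstyle}

For a twoside document, the optional bracketed arguments of \sethead can be used to mirror the header on even pages.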
https://www.physicsforums.com/threads/where-could-i-find-a-high-amp-constant-current-power-supply.739103/
# Where could I find a high-amp constant current power supply? 1. Feb 18, 2014 ### Xtensity All of the ones I've seen online can only do about 30 amps tops... then there are welding machines, which I am not sure if those can used for non-welding applications without modification of the machine. I am not looking for something that is variable, just something that can output a high current(30-1000) while adjusting the voltage accordingly. I found some supplied by various companies but they do not have prices listed and are likely very expensive - such ones the voltage and current can be adjusted, which I do not have a particular need for that feature. 2. Feb 19, 2014 ### meBigGuy What voltage range? I understand you want a current source, but what is the highest voltage it will need to be able to produce, and at what current. Or, asked another way, at 1000 AMPs, what do you expect the output voltage to be? There is a big difference between 0.1V and 24V (100 watts vs 24KW) 3. Feb 19, 2014 ### Xtensity The max voltage I would need would be 2, maybe 2.5 volts tops. 4. Feb 19, 2014 ### meBigGuy Last edited: Feb 19, 2014 5. Feb 19, 2014 I've seen many of these.... trying to stay under $1,000. I do not need the system to be variable, which I think is what really hikes the price up, but rather I need a constant and steady high current output. For instance 200 amp welding power supplies are a few hundred bucks... but a 200 amp variable switching power supply might be a thousand or so. The former option is clearly available in the form of a welding power supply, but my question is are there other forms of CC power supplies with higher current outputs? 6. Feb 19, 2014 ### Baluncore Firstly, what is your application? Do you only need to limit the peak current? How constant must the current be? Bandwidth, how quickly must the regulator respond? Gradient field amplifiers used in MRI would be ideal. There are several lower cost solutions. For a low voltage output it is most efficient to use a rotary converter such as a DC generator driven by an AC motor. Current regulation can be achieved through feedback to the field. The next most efficient solution would be a switching converter using synchronous MOSFETs instead of output diodes. A welder is designed to produce between 20 and 40 volts when operating. You could reduce the number of turns on the secondary of the welding transformer to say three turns of copper strap. That would reduce the supply current needed while increasing available current and efficiency. Some older welders have a movable magnetic shunt that is used to regulate current. That might be used to limit the current. I use such a system to fast charge truck batteries through a 500 amp bridge rectifier. 7. Feb 19, 2014 ### Xtensity I intend to electroplate very large items as well as other related applications. I just need the current to be a high number, in the range of 600-1000, steady. CC - Constant current. How Constant? I was under the impression most CC power supplies can keep the current decently constant by adjusting the voltage to compensate for resistance changes. Bandwidth... for my applications this probably isn't too important. If the answer "not slow" tells you anything. I am not a power supply expert - just for the record.. so I do not know much about this. I am not looking to modify any existing power supplies to a great extent. This isn't really my field of expertise. 
Is there no such thing as a CC(constant current) high amperage power supply, say maybe around 2kW, with amperage 600+? 8. Feb 19, 2014 ### Baluncore I would suggest you experiment with a low cost caddy welder. It will cost about US$350. If it does not work then you can return it under warranty, or sell it on the second hand market. Select based on the current available at continuous operation = 100% duty cycle. Set to DC. Turn off any pulse mode. Wind the voltage right down. Wind the current up. 9. Feb 20, 2014 ### Xtensity I will look into it. On such a device, with constant current mode, the voltage auto adjusts, correct? I was somewhat thrown off by when you said to set both the voltage and the current. 10. Feb 20, 2014 ### jim hardy 11. Feb 20, 2014 ### Baluncore With a caddy welder, current is limited but so is the droop voltage to maintain the arc. They are designed to primarily regulate current. It is able to produce more voltage than you need. It is just a constant current supply with a voltage limiter. I would use a second hand choke welder, heavy but cheap, with some secondary winding turns removed, half the windings, twice the current. The industrial bridge rectifier could be from a DC adapter, (ex MIG or TIG). 12. Feb 20, 2014
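As a rough sanity check on the numbers discussed in this thread, the required power is just current times voltage, $P = IV$: the roughly 2 kW supply asked about in post 7 corresponds to about $800\,\mathrm{A} \times 2.5\,\mathrm{V} = 2\,\mathrm{kW}$, and post 2's comparison is $1000\,\mathrm{A} \times 0.1\,\mathrm{V} = 100\,\mathrm{W}$ versus $1000\,\mathrm{A} \times 24\,\mathrm{V} = 24\,\mathrm{kW}$.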
https://bioinformatics.stackexchange.com/questions/4139/visualisation-of-genome-alignment/4153
# visualisation of genome alignment I was asked to check the synteny of some genes in two genome assemblies of the same species (PacBio & Illumina). I was given two scaffolds couples (so 4 scaffolds total) in the Illumina genome and asked if they were assembled together in the PacBio assembly. So, I proceeded with a DNA-DNA alignment with minimap2 of the two Illumina scaffolds on the PacBio whole genome. But I am not sure on how to directly use the sam file for the question I was asked. So I tried to visualize this alignment using Mauve. It worked for the first couple, but it failed (blank page) for the second couple of scaffolds. Surely, Mauve is not the best option out, so can you suggest an alternative? This is a chordata genome. ## 3 Answers After a bit of digging, finally found exactly what I was looking for. I'll share here in case someone else was looking for the same thing. I used a standard Nucleotide BLAST (blastn), coupled with MUMmer. Short tutorial. Assuming that xn & yn are the couple of scaffolds I want to check if they are assembled together in the Pacbio assembly (a multi-fasta file): 1) first, I blasted the pair of Illumina scaffolds (as separate queries) to the PacBio genome (as subject): blastn -query ../illumina_asm/scaffold_x1.fa -subject ../pacbio_asm/genome.fa > out_scaffold_x1; blastn -query ../illumina_asm/scaffold_y1.fa -subject ../pacbio_asm/genome.fa > out_scaffold_y1 The candidates PacBio scaffolds shall appear in the top blastn hits in both out_scaffold_x1 & out_scaffold_y1. Let's assume we found one common scaffold only - corresponding to the fasta entry pacbio_scaffold in the PacBio genome. 2) extract the sequence of the target scaffold from the PacBio assembly, using samtools faidx: samtools faidx ../pacbio_asm/genome.fa; samtools faidx ../pacbio_asm/genome.fa 'pacbio_scaffold' > pacbio_scaffold.fa 3) concatenate the Illumina scaffolds to analyze: cat ../illumina_asm/scaffold_x1.fa ../illumina_asm/scaffold_y1.fa > illumina_scaffolds.fa 4) compare the Illumina scaffolds to the PacBio assembly using nucmer from MUMmer: nucmer --prefix=pacbio_scaffold pacbio_scaffold.fa illumina_scaffolds.fa this will generate a pacbio_scaffold.delta file, used in the next step 5) filter the delta file to keep only the most meaningful alignments file using delta-filter from MUMmer (avoiding confusing messy plots later): delta-filter -m pacbio_scaffold.delta > pacbio_scaffold.delta.m 6) use mummerplot from MUMmer to achieve a graph: mummerplot --png pacbio_scaffold.delta.m -R pacbio_scaffold.fa -Q illumina_scaffolds.fa --prefix=pacbio_scaffold -large -layout On the x-axis, the PacBio scaffold they have in common. On the y-axis, the two Illumina scaffolds (separated by the horizontal grey abline). Colors correspond to the orientation. Example of Illumina scaffolds linked in the PacBio assembly: • If you prefer dotter plot: peerj.com/preprints/26567. There is also minimap2 -DP ref.fa query.fa|miniasm/minidot - > dot.eps, though the visual is not as good as mummerplot. Apr 26 '18 at 16:36 • Also: (1) MashMap - repo has a dotplot script: github.com/marbl/MashMap & (2) Assemblytics - requiring a nucmer delta file: assemblytics.com Apr 27 '18 at 10:20 I think you could use circos for this. The configuration is kind of tedious but it gives nice results. You would basically need to retrieve alignments coordinates and parse them into this format. scaf1 19720 31917 scaf2 28307 34227 scaf1 28307 34227 scaf2 19720 31917 Example figure: • nice thanks! 
I have also been advised to try MUMmer (mummer.sourceforge.net), for easy dotplots Apr 25 '18 at 21:56
I remember a few options for pairwise genome visualisation (which may be outdated BTW):
• Thanks Léo! Maybe also worth noting: (1) Gepard for GUI dotplots: cube.univie.ac.at/gepard; (2) Ribbon - and actually (3) Mauve could also be run from the command line: sourceforge.net/p/ngopt/wiki/… Enough material for the next bioinfo-DEE meeting I guess ;) Apr 27 '18 at 6:58
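Since the original alignment was done with minimap2, one more option is to skip SAM entirely and answer the "are the two Illumina scaffolds joined?" question straight from PAF output — a rough sketch, assuming the same file layout and the concatenated illumina_scaffolds.fa from step 3 of the accepted answer (output file names are illustrative):

# map the concatenated Illumina scaffolds to the PacBio assembly, PAF output
minimap2 -x asm5 ../pacbio_asm/genome.fa illumina_scaffolds.fa > aln.paf

# PAF column 1 = query (Illumina scaffold), 6 = target (PacBio contig),
# 8-9 = target start/end, 11 = alignment block length;
# keep the longest alignment per query
sort -k11,11nr aln.paf | awk '!seen[$1]++ {print $1, $6, $8, $9}'

If both Illumina scaffolds report the same PacBio contig, with adjacent rather than overlapping target coordinates, they have been assembled together in the PacBio assembly.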
https://www.rdocumentation.org/packages/PBSmapping/versions/2.72.1
# PBSmapping v2.72.1 0 0th Percentile ## Mapping Fisheries Data and Spatial Analysis Tools This software has evolved from fisheries research conducted at the Pacific Biological Station (PBS) in 'Nanaimo', British Columbia, Canada. It extends the R language to include two-dimensional plotting features similar to those commonly available in a Geographic Information System (GIS). Embedded C code speeds algorithms from computational geometry, such as finding polygons that contain specified point events or converting between longitude-latitude and Universal Transverse Mercator (UTM) coordinates. Additionally, we include 'C++' code developed by Angus Johnson for the 'Clipper' library, data for a global shoreline, and other data sets in the public domain. Under the user's R library directory '.libPaths()', specifically in './PBSmapping/doc', a complete user's guide is offered and should be consulted to use package functions effectively.
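As a quick illustration of the longitude-latitude/UTM conversion mentioned above — a minimal sketch with made-up coordinates near Nanaimo, assuming the usual PBSmapping attribute conventions for projection and zone:

library(PBSmapping)

# a tiny EventData-style table in longitude-latitude
events <- data.frame(EID = 1:2, X = c(-123.94, -124.50), Y = c(49.17, 49.70))
attr(events, "projection") <- "LL"   # current coordinate system
attr(events, "zone") <- 10           # UTM zone to convert into

utm <- convUL(events)   # LL -> UTM (km); with projection "UTM" it converts back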
https://www.physicsforums.com/threads/tensor-from-potential-function.781569/
# Tensor from Potential Function 1. Nov 12, 2014 ### KleZMeR 1. The problem statement, all variables and given/known data I am looking at Goldstein, Classical Mechanics. I am on page 254, and trying to reference page 190 for my confusion. I don't understand how they got from equation 6.49 to 6.50, potential energy function to tensor matrix. I really want to know how to calculate a tensor from a function of this type (any type), but somehow the Goldstein text is not clear to me. 2. Relevant equations $V = \frac{k}{2} (\eta_{1}^2+2\eta_{2}^2 +\eta_{3}^2-2\eta_{1}\eta_{2}-2\eta_{2}\eta_{3})$ \begin{array}{ccc} k & -k & 0 \\ -k & 2k & -k \\ 0 & -k & k \end{array} 3. The attempt at a solution The solution is given. I think this is done by means of equation 5.14, but again, I am not too clear on this. 2. Nov 12, 2014 ### ShayanJ $\mathcal V=\frac 1 2 \vec \eta^T V \vec\eta=\frac 1 2 (\eta_1 \ \ \ \eta_2 \ \ \ \eta_3) \left(\begin{array}{ccc} k \ \ \ \ -k \ \ \ \ 0 \\ -k \ \ \ \ 2k \ \ \ \ -k \\ 0 \ \ \ \ -k \ \ \ \ k \end{array} \right)\ \left( \begin{array}{c} \eta_1 \\ \eta_2 \\ \eta_3 \end{array} \right)$ Last edited: Nov 12, 2014 3. Nov 12, 2014 ### KleZMeR Thanks Shyan, but how do I decompose the potential function to arrive at this? Or, rather, how do I represent my function in Einstein's summation notation? I believe from what you are showing that my potential function itself can be written as a matrix and be decomposed by two multiplications using $\eta^T , \eta$? 4. Nov 13, 2014 ### ShayanJ The potential function is a scalar so you can't write it as a matrix. And the thing I wrote, that's the simplest way of getting a scalar from a vector and a tensor. So people consider this and define the potential tensor which may be useful in some ways. In component notation and using Einstein summation convention, its written as: $\mathcal V=\frac 1 2 \eta_i V^i_j\eta^j$ But the potential function itself, is just $\mathcal V$ in component notation because its a scalar and has only one component! 5. Nov 20, 2014 ### KleZMeR Thank you!! That did help a LOT. Somehow I keep resorting back to the Goldstein book because it is the same notation we use in lecture and tests, but it does lack some wording in my opinion. I guess the explanation you gave would be better found in a math-methods book.
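For anyone landing on this thread with the same question, the missing step can be written out explicitly. The entries of the potential tensor are the second derivatives of the potential (ShayanJ's $\mathcal V$) with respect to the displacements, evaluated at the equilibrium point,

$$V_{ij}=\left.\frac{\partial^2 \mathcal{V}}{\partial\eta_i\,\partial\eta_j}\right|_{0},$$

and for the quadratic potential quoted above these are

$$\frac{\partial^2\mathcal{V}}{\partial\eta_1^2}=\frac{\partial^2\mathcal{V}}{\partial\eta_3^2}=k,\qquad \frac{\partial^2\mathcal{V}}{\partial\eta_2^2}=2k,\qquad \frac{\partial^2\mathcal{V}}{\partial\eta_1\partial\eta_2}=\frac{\partial^2\mathcal{V}}{\partial\eta_2\partial\eta_3}=-k,\qquad \frac{\partial^2\mathcal{V}}{\partial\eta_1\partial\eta_3}=0,$$

which reproduces the matrix entry by entry and is equivalent to writing $\mathcal{V}=\tfrac{1}{2}\eta_i V_{ij}\eta_j$.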
http://mail-archives.apache.org/mod_mbox/flink-commits/201504.mbox/%3C1df4c5a68c854df7bd1bcc3ebc324c30@git.apache.org%3E
##### Site index · List index Message view Top From u..@apache.org Subject [07/30] flink git commit: [docs] Change doc layout Date Wed, 22 Apr 2015 14:17:04 GMT ---------------------------------------------------------------------- diff --git a/docs/libs/ml/als.md b/docs/libs/ml/als.md new file mode 100644 index 0000000..7a4a5d5 --- /dev/null +++ b/docs/libs/ml/als.md @@ -0,0 +1,157 @@ +--- +mathjax: include +title: Alternating Least Squares +--- +<!-- +Licensed to the Apache Software Foundation (ASF) under one +or more contributor license agreements. See the NOTICE file +distributed with this work for additional information +to you under the Apache License, Version 2.0 (the +"License"); you may not use this file except in compliance +with the License. You may obtain a copy of the License at + + +Unless required by applicable law or agreed to in writing, +"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +KIND, either express or implied. See the License for the +specific language governing permissions and limitations +--> + +* This will be replaced by the TOC +{:toc} + +## Description + +The alternating least squares (ALS) algorithm factorizes a given matrix $R$ into two factors $U$ and $V$ such that $R \approx U^TV$. +The unknown row dimension is given as a parameter to the algorithm and is called latent factors. +Since matrix factorization can be used in the context of recommendation, the matrices $U$ and $V$ can be called user and item matrix, respectively. +The $i$th column of the user matrix is denoted by $u_i$ and the $i$th column of the item matrix is $v_i$. +The matrix $R$ can be called the ratings matrix with $$(R)_{i,j} = r_{i,j}$$. + +In order to find the user and item matrix, the following problem is solved: + +$$\arg\min_{U,V} \sum_{\{i,j\mid r_{i,j} \not= 0\}} \left(r_{i,j} - u_{i}^Tv_{j}\right)^2 + +\lambda \left(\sum_{i} n_{u_i} \left\lVert u_i \right\rVert^2 + \sum_{j} n_{v_j} \left\lVert v_j \right\rVert^2 \right)$$ + +with $\lambda$ being the regularization factor, $$n_{u_i}$$ being the number of items the user $i$ has rated and $$n_{v_j}$$ being the number of times the item $j$ has been rated. +This regularization scheme to avoid overfitting is called weighted-$\lambda$-regularization. +Details can be found in the work of [Zhou et al.](http://dx.doi.org/10.1007/978-3-540-68880-8_32). + +By fixing one of the matrices $U$ or $V$, we obtain a quadratic form which can be solved directly. +The solution of the modified problem is guaranteed to monotonically decrease the overall cost function. +By applying this step alternately to the matrices $U$ and $V$, we can iteratively improve the matrix factorization. + +The matrix $R$ is given in its sparse representation as a tuple of $(i, j, r)$ where $i$ denotes the row index, $j$ the column index and $r$ is the matrix value at position $(i,j)$. + + +## Parameters + +The alternating least squares implementation can be controlled by the following parameters: + + <table class="table table-bordered"> + <tr> + <th class="text-left" style="width: 20%">Parameters</th> + <th class="text-center">Description</th> + </tr> + + <tbody> + <tr> + <td><strong>NumFactors</strong></td> + <td> + <p> + The number of latent factors to use for the underlying model. + It is equivalent to the dimension of the calculated user and item vectors. + (Default value: <strong>10</strong>) + </p> + </td> + </tr> + <tr> + <td><strong>Lambda</strong></td> + <td> + <p> + Regularization factor. 
Tune this value in order to avoid overfitting or poor performance due to strong generalization. + (Default value: <strong>1</strong>) + </p> + </td> + </tr> + <tr> + <td><strong>Iterations</strong></td> + <td> + <p> + The maximum number of iterations. + (Default value: <strong>10</strong>) + </p> + </td> + </tr> + <tr> + <td><strong>Blocks</strong></td> + <td> + <p> + The number of blocks into which the user and item matrix are grouped. + The fewer blocks one uses, the less data is sent redundantly. + However, bigger blocks entail bigger update messages which have to be stored on the heap. + If the algorithm fails because of an OutOfMemoryException, then try to increase the number of blocks. + (Default value: '''None''') + </p> + </td> + </tr> + <tr> + <td><strong>Seed</strong></td> + <td> + <p> + Random seed used to generate the initial item matrix for the algorithm. + (Default value: <strong>0</strong>) + </p> + </td> + </tr> + <tr> + <td><strong>TemporaryPath</strong></td> + <td> + <p> + Path to a temporary directory into which intermediate results are stored. + If this value is set, then the algorithm is split into two preprocessing steps, the ALS iteration and a post-processing step which calculates a last ALS half-step. + The preprocessing steps calculate the <code>OutBlockInformation</code> and <code>InBlockInformation</code> for the given rating matrix. + The results of the individual steps are stored in the specified directory. + By splitting the algorithm into multiple smaller steps, Flink does not have to split the available memory amongst too many operators. + This allows the system to process bigger individual messages and improves the overall performance. + (Default value: <strong>None</strong>) + </p> + </td> + </tr> + </tbody> + </table> + +## Examples + +{% highlight scala %} +// Read input data set from a csv file +val inputDS: DataSet[(Int, Int, Double)] = env.readCsvFile[(Int, Int, Double)]( + pathToTrainingFile) + +// Setup the ALS learner +val als = ALS() +.setIterations(10) +.setNumFactors(10) +.setBlocks(100) +.setTemporaryPath("hdfs://tempPath") + +// Set the other parameters via a parameter map +val parameters = ParameterMap() + +// Calculate the factorization +val factorization = als.fit(inputDS, parameters) + +// Read the testing data set from a csv file +val testingDS: DataSet[(Int, Int)] = env.readCsvFile[(Int, Int)](pathToData) + +// Calculate the ratings according to the matrix factorization +val predictedRatings = factorization.transform(testingDS) +{% endhighlight %} \ No newline at end of file ---------------------------------------------------------------------- diff --git a/docs/libs/ml/cocoa.md b/docs/libs/ml/cocoa.md new file mode 100644 index 0000000..0bf8d67 --- /dev/null +++ b/docs/libs/ml/cocoa.md @@ -0,0 +1,164 @@ +--- +mathjax: include +title: Communication efficient distributed dual coordinate ascent (CoCoA) +--- +<!-- +Licensed to the Apache Software Foundation (ASF) under one +or more contributor license agreements. See the NOTICE file +distributed with this work for additional information +to you under the Apache License, Version 2.0 (the +"License"); you may not use this file except in compliance +with the License. You may obtain a copy of the License at + + +Unless required by applicable law or agreed to in writing, +"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +KIND, either express or implied. 
See the License for the +specific language governing permissions and limitations +--> + +* This will be replaced by the TOC +{:toc} + +## Description + +Implements the communication-efficient distributed dual coordinate ascent algorithm with hinge-loss function. +The algorithm can be used to train a SVM with soft-margin. +The algorithm solves the following minimization problem: + +$$\min_{\mathbf{w} \in \mathbb{R}^d} \frac{\lambda}{2} \left\lVert \mathbf{w} \right\rVert^2 + \frac{1}{n} \sum_{i=1}^n l_{i}\left(\mathbf{w}^T\mathbf{x}_i\right)$$ + +with $\mathbf{w}$ being the weight vector, $\lambda$ being the regularization constant, +$$\mathbf{x}_i \in \mathbb{R}^d$$ being the data points and $$l_{i}$$ being the convex loss +functions, which can also depend on the labels $$y_{i} \in \mathbb{R}$$. +In the current implementation the regularizer is the $\ell_2$-norm and the loss functions are the hinge-loss functions: + + $$l_{i} = \max\left(0, 1 - y_{i} \mathbf{w}^T\mathbf{x}_i \right)$$ + +With these choices, the problem definition is equivalent to a SVM with soft-margin. +Thus, the algorithm allows us to train a SVM with soft-margin. + +The minimization problem is solved by applying stochastic dual coordinate ascent (SDCA). +In order to make the algorithm efficient in a distributed setting, the CoCoA algorithm calculates +several iterations of SDCA locally on a data block before merging the local updates into a +valid global state. +This state is redistributed to the different data partitions where the next round of local SDCA +iterations is then executed. +The number of outer iterations and local SDCA iterations control the overall network costs, because +there is only network communication required for each outer iteration. +The local SDCA iterations are embarrassingly parallel once the individual data partitions have been +distributed across the cluster. + +The implementation of this algorithm is based on the work of +[Jaggi et al.](http://arxiv.org/abs/1409.1458 here) + +## Parameters + +The CoCoA implementation can be controlled by the following parameters: + + <table class="table table-bordered"> + <tr> + <th class="text-left" style="width: 20%">Parameters</th> + <th class="text-center">Description</th> + </tr> + + <tbody> + <tr> + <td><strong>Blocks</strong></td> + <td> + <p> + Sets the number of blocks into which the input data will be split. + On each block the local stochastic dual coordinate ascent method is executed. + This number should be set at least to the degree of parallelism. + If no value is specified, then the parallelism of the input DataSet is used as the number of blocks. + (Default value: <strong>None</strong>) + </p> + </td> + </tr> + <tr> + <td><strong>Iterations</strong></td> + <td> + <p> + Defines the maximum number of iterations of the outer loop method. + In other words, it defines how often the SDCA method is applied to the blocked data. + After each iteration, the locally computed weight vector updates have to be reduced to update the global weight vector value. + The new weight vector is broadcast to all SDCA tasks at the beginning of each iteration. + (Default value: <strong>10</strong>) + </p> + </td> + </tr> + <tr> + <td><strong>LocalIterations</strong></td> + <td> + <p> + Defines the maximum number of SDCA iterations. + In other words, it defines how many data points are drawn from each local data block to calculate the stochastic dual coordinate ascent. 
+ (Default value: <strong>10</strong>)
+ </p>
+ </td>
+ </tr>
+ <tr>
+ <td><strong>Regularization</strong></td>
+ <td>
+ <p>
+ Defines the regularization constant of the CoCoA algorithm.
+ The higher the value, the smaller the 2-norm of the weight vector will be.
+ In the case of an SVM with hinge loss this means that the SVM margin will be wider, even though it might contain some misclassifications.
+ (Default value: <strong>1.0</strong>)
+ </p>
+ </td>
+ </tr>
+ <tr>
+ <td><strong>Stepsize</strong></td>
+ <td>
+ <p>
+ Defines the initial step size for the updates of the weight vector.
+ The larger the step size, the larger the contribution of the weight vector updates to the next weight vector value.
+ The effective scaling of the updates is $\frac{stepsize}{blocks}$.
+ This value has to be tuned in case the algorithm becomes unstable.
+ (Default value: <strong>1.0</strong>)
+ </p>
+ </td>
+ </tr>
+ <tr>
+ <td><strong>Seed</strong></td>
+ <td>
+ <p>
+ Defines the seed to initialize the random number generator.
+ The seed directly controls which data points are chosen for the SDCA method.
+ (Default value: <strong>0</strong>)
+ </p>
+ </td>
+ </tr>
+ </tbody>
+ </table>
+
+## Examples
+
+{% highlight scala %}
+// Read the training data set
+
+// Create the CoCoA learner
+val cocoa = CoCoA()
+.setBlocks(10)
+.setIterations(10)
+.setLocalIterations(10)
+.setRegularization(0.5)
+.setStepsize(0.5)
+
+// Learn the SVM model
+val svm = cocoa.fit(trainingDS)
+
+// Read the testing data set
+
+// Calculate the predictions for the testing data set
+val predictionDS: DataSet[LabeledVector] = svm.transform(testingDS)
+{% endhighlight %}
\ No newline at end of file
----------------------------------------------------------------------
diff --git a/docs/libs/ml/index.md b/docs/libs/ml/index.md
new file mode 100644
index 0000000..9753e68
--- /dev/null
+++ b/docs/libs/ml/index.md
@@ -0,0 +1,39 @@
+---
+title: "Machine Learning Library"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements. See the NOTICE file
+distributed with this work for additional information
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License. You may obtain a copy of the License at
+
+
+Unless required by applicable law or agreed to in writing,
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied. See the License for the
+specific language governing permissions and limitations
+-->
+
+
+{% highlight xml %}
+<dependency>
+ <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+## Algorithms
+
+* [Alternating Least Squares (ALS)](als.html)
+* [Communication efficient distributed dual coordinate ascent (CoCoA)](cocoa.html)
+* [Multiple linear regression](multiple_linear_regression.html)
+* [Polynomial Base Feature Mapper](polynomial_base_feature_mapper.html)
+* [Standard Scaler](standard_scaler.html)
\ No newline at end of file
----------------------------------------------------------------------
diff --git a/docs/libs/ml/multiple_linear_regression.md b/docs/libs/ml/multiple_linear_regression.md
new file mode 100644
index 0000000..840e899
--- /dev/null
+++ b/docs/libs/ml/multiple_linear_regression.md
@@ -0,0 +1,124 @@
+---
+mathjax: include
+title: "Multiple linear regression"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements. See the NOTICE file
+distributed with this work for additional information
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License. You may obtain a copy of the License at
+
+
+Unless required by applicable law or agreed to in writing,
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied. See the License for the
+specific language governing permissions and limitations
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+## Description
+
+ Multiple linear regression tries to find a linear function which best fits the provided input data.
+ Given a set of input data points with their respective values $(\mathbf{x}_i, y_i)$, multiple linear regression finds
+ a vector $\mathbf{w}$ such that the sum of the squared residuals is minimized:
+
+ $$S(\mathbf{w}) = \sum_{i=1}^n \left(y_i - \mathbf{w}^T\mathbf{x}_i \right)^2$$
+
+ Written in matrix notation, we obtain the following formulation:
+
+ $$\mathbf{w}^* = \arg \min_{\mathbf{w}} \left\lVert \mathbf{y} - X\mathbf{w} \right\rVert^2$$
+
+ This problem has a closed form solution which is given by:
+
+ $$\mathbf{w}^* = \left(X^TX\right)^{-1}X^T\mathbf{y}$$
+
+ However, in cases where the input data set is so huge that a complete parse over the whole data
+ set is prohibitive, one can apply stochastic gradient descent (SGD) to approximate the solution.
+ SGD first calculates the gradients for a random subset of the input data set. The gradient
+ for a given point $\mathbf{x}_i$ is given by:
+
+ $$\nabla_{\mathbf{w}} S(\mathbf{w}, \mathbf{x}_i) = 2\left(\mathbf{w}^T\mathbf{x}_i -
+ y_i\right)\mathbf{x}_i$$
+
+ The gradients are averaged and scaled. The scaling is defined by $\gamma = \frac{s}{\sqrt{j}}$,
+ with $s$ being the initial step size and $j$ being the current iteration number. The resulting gradient is subtracted from the
+ current weight vector, giving the new weight vector for the next iteration:
+
+ $$\mathbf{w}_{t+1} = \mathbf{w}_t - \gamma \frac{1}{n}\sum_{i=1}^n \nabla_{\mathbf{w}} S(\mathbf{w}, \mathbf{x}_i)$$
+
+ The multiple linear regression algorithm computes either a fixed number of SGD iterations or terminates based on a dynamic convergence criterion.
+ The convergence criterion is the relative change in the sum of squared residuals:
+
+ $$\frac{S_{k-1} - S_k}{S_{k-1}} < \rho$$
+
+## Parameters
+
+ The multiple linear regression implementation can be controlled by the following parameters:
+
+ <table class="table table-bordered">
+ <tr>
+ <th class="text-left" style="width: 20%">Parameters</th>
+ <th class="text-center">Description</th>
+ </tr>
+
+ <tbody>
+ <tr>
+ <td><strong>Iterations</strong></td>
+ <td>
+ <p>
+ The maximum number of iterations. (Default value: <strong>10</strong>)
+ </p>
+ </td>
+ </tr>
+ <tr>
+ <td><strong>Stepsize</strong></td>
+ <td>
+ <p>
+ Initial step size for the gradient descent method.
+ This value controls how far the gradient descent method moves in the opposite direction of the gradient.
+ Tuning this parameter might be crucial to make the method stable and to achieve better performance.
+ (Default value: <strong>0.1</strong>)
+ </p>
+ </td>
+ </tr>
+ <tr>
+ <td><strong>ConvergenceThreshold</strong></td>
+ <td>
+ <p>
+ Threshold for the relative change of the sum of squared residuals below which the iteration is stopped.
+ (Default value: <strong>None</strong>) + </p> + </td> + </tr> + </tbody> + </table> + +## Examples + +{% highlight scala %} +// Create multiple linear regression learner +val mlr = MultipleLinearRegression() +.setIterations(10) +.setStepsize(0.5) +.setConvergenceThreshold(0.001) + +// Obtain training and testing data set +val trainingDS: DataSet[LabeledVector] = ... +val testingDS: DataSet[Vector] = ... + +// Fit the linear model to the provided data +val model = mlr.fit(trainingDS) + +// Calculate the predictions for the test data +val predictions = model.transform(testingDS) +{% endhighlight %} ---------------------------------------------------------------------- diff --git a/docs/libs/ml/polynomial_base_feature_mapper.md b/docs/libs/ml/polynomial_base_feature_mapper.md new file mode 100644 index 0000000..2964f04 --- /dev/null +++ b/docs/libs/ml/polynomial_base_feature_mapper.md @@ -0,0 +1,91 @@ +--- +mathjax: include +title: Polynomial Base Feature Mapper +--- +<!-- +Licensed to the Apache Software Foundation (ASF) under one +or more contributor license agreements. See the NOTICE file +distributed with this work for additional information +to you under the Apache License, Version 2.0 (the +"License"); you may not use this file except in compliance +with the License. You may obtain a copy of the License at + + +Unless required by applicable law or agreed to in writing, +"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +KIND, either express or implied. See the License for the +specific language governing permissions and limitations +--> + +* This will be replaced by the TOC +{:toc} + +## Description + +The polynomial base feature mapper maps a vector into the polynomial feature space of degree $d$. +The dimension of the input vector determines the number of polynomial factors whose values are the respective vector entries. +Given a vector $(x, y, z, \ldots)^T$ the resulting feature vector looks like: + +$$\left(x, y, z, x^2, xy, y^2, yz, z^2, x^3, x^2y, x^2z, xy^2, xyz, xz^2, y^3, \ldots\right)^T$$ + +Flink's implementation orders the polynomials in decreasing order of their degree. + +Given the vector $\left(3,2\right)^T$, the polynomial base feature vector of degree 3 would look like + + $$\left(3^3, 3^2\cdot2, 3\cdot2^2, 2^3, 3^2, 3\cdot2, 2^2, 3, 2\right)^T$$ + +This transformer can be prepended to all Transformer and Learner implementations which expect an input of type LabeledVector. + +## Parameters + +The polynomial base feature mapper can be controlled by the following parameters: + +<table class="table table-bordered"> + <tr> + <th class="text-left" style="width: 20%">Parameters</th> + <th class="text-center">Description</th> + </tr> + + <tbody> + <tr> + <td><strong>Degree</strong></td> + <td> + <p> + The maximum polynomial degree. + (Default value: <strong>10</strong>) + </p> + </td> + </tr> + </tbody> + </table> + +## Examples + +{% highlight scala %} +// Obtain the training data set +val trainingDS: DataSet[LabeledVector] = ... 
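+
+// For a two-dimensional input (x, y) and degree 3, the mapper described above
+// expands each vector to (x^3, x^2*y, x*y^2, y^3, x^2, x*y, y^2, x, y) before training.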
+ +// Setup polynomial base feature extractor of degree 3 +val polyBase = PolynomialBase() +.setDegree(3) + +// Setup the multiple linear regression learner +val mlr = MultipleLinearRegression() + +// Control the learner via the parameter map +val parameters = ParameterMap() + +// Create pipeline PolynomialBase -> MultipleLinearRegression +val chained = polyBase.chain(mlr) + +// Learn the model +val model = chained.fit(trainingDS) +{% endhighlight %} \ No newline at end of file ---------------------------------------------------------------------- diff --git a/docs/libs/ml/standard_scaler.md b/docs/libs/ml/standard_scaler.md new file mode 100644 index 0000000..aae4620 --- /dev/null +++ b/docs/libs/ml/standard_scaler.md @@ -0,0 +1,90 @@ +--- +mathjax: include +title: "Standard Scaler" +--- +<!-- +Licensed to the Apache Software Foundation (ASF) under one +or more contributor license agreements. See the NOTICE file +distributed with this work for additional information +to you under the Apache License, Version 2.0 (the +"License"); you may not use this file except in compliance +with the License. You may obtain a copy of the License at + + +Unless required by applicable law or agreed to in writing, +"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +KIND, either express or implied. See the License for the +specific language governing permissions and limitations +--> + +* This will be replaced by the TOC +{:toc} + +## Description + + The standard scaler scales the given data set, so that all features will have a user specified mean and variance. + In case the user does not provide a specific mean and standard deviation, the standard scaler transforms the features of the input data set to have mean equal to 0 and standard deviation equal to 1. + Given a set of input data $x_{1}, x_{2},... x_{n}$, with mean: + + $$\bar{x} = \frac{1}{n}\sum_{i=1}^{n}x_{i}$$ + + and standard deviation: + + $$\sigma_{x}=\sqrt{ \frac{1}{n} \sum_{i=1}^{n}(x_{i}-\bar{x})^{2}}$$ + +The scaled data set $z_{1}, z_{2},...,z_{n}$ will be: + + $$z_{i}= std \left (\frac{x_{i} - \bar{x} }{\sigma_{x}}\right ) + mean$$ + +where $\textit{std}$ and $\textit{mean}$ are the user specified values for the standard deviation and mean. + +## Parameters + +The standard scaler implementation can be controlled by the following two parameters: + + <table class="table table-bordered"> + <tr> + <th class="text-left" style="width: 20%">Parameters</th> + <th class="text-center">Description</th> + </tr> + + <tbody> + <tr> + <td><strong>Mean</strong></td> + <td> + <p> + The mean of the scaled data set. (Default value: <strong>0.0</strong>) + </p> + </td> + </tr> + <tr> + <td><strong>Std</strong></td> + <td> + <p> + The standard deviation of the scaled data set. (Default value: <strong>1.0</strong>) + </p> + </td> + </tr> + </tbody> +</table> + +## Examples + +{% highlight scala %} +// Create standard scaler transformer +val scaler = StandardScaler() +.setMean(10.0) +.setStd(2.0) + +// Obtain data set to be scaled +val dataSet: DataSet[Vector] = ... 
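+
+// Each feature x is standardized to (x - mean(x)) / std(x) and then rescaled
+// with the configured values, i.e. z = 2.0 * standardized(x) + 10.0 here.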
+
+// Scale the provided data set to have mean=10.0 and std=2.0
+val scaledDS = scaler.transform(dataSet)
+{% endhighlight %}
----------------------------------------------------------------------
diff --git a/docs/libs/spargel_guide.md b/docs/libs/spargel_guide.md
new file mode 100644
index 0000000..87a9326
--- /dev/null
+++ b/docs/libs/spargel_guide.md
@@ -0,0 +1,131 @@
+---
+title: "Spargel Graph Processing API"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements. See the NOTICE file
+distributed with this work for additional information
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License. You may obtain a copy of the License at
+
+
+Unless required by applicable law or agreed to in writing,
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied. See the License for the
+specific language governing permissions and limitations
+-->
+
+Spargel is our [Giraph](http://giraph.apache.org)-like **graph processing** Java API. It supports basic graph computations, which are run as a sequence of [supersteps](iterations.html#supersteps). Spargel and Giraph both implement the [Bulk Synchronous Parallel (BSP)](https://en.wikipedia.org/wiki/Bulk_Synchronous_Parallel) programming model, proposed by Google's [Pregel](http://googleresearch.blogspot.de/2009/06/large-scale-graph-computing-at-google.html).
+
+The API provides a **vertex-centric** view on graph processing with two basic operations per superstep:
+
+ 1. **Send messages** to other vertices, and
+ 2. **Receive messages** from other vertices and **update own vertex state**.
+
+This vertex-centric view makes it easy to express a large class of graph problems efficiently. We will list all *relevant interfaces* of the **Spargel API** to implement and then walk through an **example Spargel program**.
+
+* This will be replaced by the TOC
+{:toc}
+
+Spargel API
+-----------
+
+The Spargel API is part of the *addons* Maven project. All relevant classes are located in the *org.apache.flink.spargel.java* package.
+
+Add the following dependency to your pom.xml to use Spargel.
+
+~~~xml
+<dependency>
+ <version>{{site.version}}</version>
+</dependency>
+~~~
+
+Extend **VertexUpdateFunction&lt;***VertexKeyType*, *VertexValueType*, *MessageType***&gt;** to implement your *custom vertex update logic*.
+
+Extend **MessagingFunction&lt;***VertexKeyType*, *VertexValueType*, *MessageType*, *EdgeValueType***&gt;** to implement your *custom message logic*.
+
+Create a **SpargelIteration** operator to include Spargel in your data flow.
+
+Example: Propagate Minimum Vertex ID in Graph
+---------------------------------------------
+
+The Spargel operator **SpargelIteration** includes Spargel graph processing into your data flow. As usual, it can be combined with other operators like *map*, *reduce*, *join*, etc.
+ +~~~java +FileDataSource vertices = new FileDataSource(...); +FileDataSource edges = new FileDataSource(...); + +SpargelIteration iteration = new SpargelIteration(new MinMessager(), new MinNeighborUpdater()); +iteration.setVertexInput(vertices); +iteration.setEdgesInput(edges); +iteration.setNumberOfIterations(maxIterations); + +FileDataSink result = new FileDataSink(...); +result.setInput(iteration.getOutput()); + +new Plan(result); +~~~ + +Besides the **program logic** of vertex updates in *MinNeighborUpdater* and messages in *MinMessager*, you have to specify the **initial vertex** and **edge input**. Every vertex has a **key** and **value**. In each superstep, it **receives messages** from other vertices and updates its state: + + - **Vertex** input: **(id**: *VertexKeyType*, **value**: *VertexValueType***)** + - **Edge** input: **(source**: *VertexKeyType*, **target**: *VertexKeyType*[, **value**: *EdgeValueType*]) + +For our example, we set the vertex ID as both *id and value* (initial minimum) and *leave out the edge values* as we don't need them: + +<p class="text-center"> + <img alt="Spargel Example Input" width="75%" src="fig/spargel_example_input.png" /> +</p> + +In order to **propagate the minimum vertex ID**, we iterate over all received messages (which contain the neighboring IDs) and update our value, if we found a new minimum: + +~~~java +public class MinNeighborUpdater extends VertexUpdateFunction<IntValue, IntValue, IntValue> { + + @Override + public void updateVertex(IntValue id, IntValue currentMin, Iterator<IntValue> messages) { + int min = Integer.MAX_VALUE; + + // iterate over all received messages + while (messages.hasNext()) { + int next = messages.next().getValue(); + min = next < min ? next : min; + } + + // update vertex value, if new minimum + if (min < currentMin.getValue()) { + setNewVertexValue(new IntValue(min)); + } + } +} +~~~ + +The **messages in each superstep** consist of the **current minimum ID** seen by the vertex: + +~~~java +public class MinMessager extends MessagingFunction<IntValue, IntValue, IntValue, NullValue> { + + @Override + public void sendMessages(IntValue id, IntValue currentMin) { + // send current minimum to neighbors + sendMessageToAllNeighbors(currentMin); + } +} +~~~ + +The **API-provided method** sendMessageToAllNeighbors(MessageType) sends the message to all neighboring vertices. It is also possible to address specific vertices with sendMessageTo(VertexKeyType, MessageType). + +If the value of a vertex does not change during a superstep, it will **not send** any messages in the superstep. This allows to do incremental updates to the **hot (changing) parts** of the graph, while leaving **cold (steady) parts** untouched. + +The computation **terminates** after a specified *maximum number of supersteps* **-OR-** the *vertex states stop changing*. + +<p class="text-center"> + <img alt="Spargel Example" width="75%" src="fig/spargel_example.png" /> +</p> ---------------------------------------------------------------------- diff --git a/docs/libs/table.md b/docs/libs/table.md new file mode 100644 index 0000000..bcd2cb1 --- /dev/null +++ b/docs/libs/table.md @@ -0,0 +1,127 @@ +--- +title: "Table API - Relational Queries" +is_beta: true +--- +<!-- +Licensed to the Apache Software Foundation (ASF) under one +or more contributor license agreements. 
See the NOTICE file
+distributed with this work for additional information
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License. You may obtain a copy of the License at
+
+
+Unless required by applicable law or agreed to in writing,
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied. See the License for the
+specific language governing permissions and limitations
+-->
+
+**The Table API is an experimental feature**
+
+Flink provides an API that allows specifying operations using SQL-like expressions. Instead of
+manipulating a DataSet or DataStream you work with a Table on which relational operations can
+be performed.
+
+The following dependency must be added to your project when using the Table API:
+
+{% highlight xml %}
+<dependency>
+ <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+## Scala Table API
+
+The Table API can be enabled by importing org.apache.flink.api.scala.table._. This enables
+implicit conversions that allow
+converting a DataSet or DataStream to a Table. This example shows how a DataSet can
+be converted, how relational queries can be specified and how a Table can be
+converted back to a DataSet:
+
+{% highlight scala %}
+
+case class WC(word: String, count: Int)
+val input = env.fromElements(WC("hello", 1), WC("hello", 1), WC("ciao", 1))
+val expr = input.toTable
+val result = expr.groupBy('word).select('word, 'count.sum as 'count).toSet[WC]
+{% endhighlight %}
+
+The expression DSL uses Scala symbols to refer to field names and we use code generation to
+transform expressions to efficient runtime code. Please note that the conversion to and from
+Tables only works when using Scala case classes or Flink POJOs. Please check out
+the [programming guide](programming_guide.html) to learn the requirements for a class to be
+considered a POJO.
+
+This is another example that shows how you
+can join two Tables:
+
+{% highlight scala %}
+case class MyResult(a: String, d: Int)
+
+val input1 = env.fromElements(...).toTable('a, 'b)
+val input2 = env.fromElements(...).toTable('c, 'd)
+val joined = input1.join(input2).where("b = a && d > 42").select("a, d").toSet[MyResult]
+{% endhighlight %}
+
+Notice how a DataSet can be converted to a Table by calling toTable and specifying new
+names for the fields. This can also be used to disambiguate fields before a join operation. Also,
+in this example we see that you can also use Strings to specify relational expressions.
+
+Please refer to the Scaladoc for a full list of supported operations and a
+description of the expression syntax.
+
+## Java Table API
+
+When using Java, Tables can be converted to and from DataSet and DataStream using TableEnvironment.
+This example is equivalent to the Scala example above:
+
+{% highlight java %}
+
+public class WC {
+
+ public WC(String word, int count) {
+ this.word = word; this.count = count;
+ }
+
+ public WC() {} // empty constructor to satisfy POJO requirements
+
+ public String word;
+ public int count;
+}
+
+...
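+
+// The POJO fields "word" and "count" are referenced by name in the relational
+// expressions below, e.g. groupBy("word") and "word.count as count".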
+ +ExecutionEnvironment env = ExecutionEnvironment.createCollectionsEnvironment(); +TableEnvironment tableEnv = new TableEnvironment(); + +DataSet<WC> input = env.fromElements( + new WC("Hello", 1), + new WC("Ciao", 1), + new WC("Hello", 1)); + +Table table = tableEnv.toTable(input); + +Table filtered = table + .groupBy("word") + .select("word.count as count, word") + .filter("count = 2"); + +DataSet<WC> result = tableEnv.toSet(filtered, WC.class); +{% endhighlight %} + +When using Java, the embedded DSL for specifying expressions cannot be used. Only String expressions +are supported. They support exactly the same feature set as the expression DSL. + +Please refer to the Javadoc for a full list of supported operations and a description of the +expression syntax. + + ---------------------------------------------------------------------- diff --git a/docs/local_execution.md b/docs/local_execution.md deleted file mode 100644 index 8e7ecc4..0000000 --- a/docs/local_execution.md +++ /dev/null @@ -1,123 +0,0 @@ ---- -title: "Local Execution" ---- -<!-- -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - -Unless required by applicable law or agreed to in writing, -"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -KIND, either express or implied. See the License for the -specific language governing permissions and limitations ---> - -## Local Execution - -Flink can run on a single machine, even in a single Java Virtual Machine. This allows users to test and debug Flink programs locally. This section gives an overview of the local execution mechanisms. - -The local environments and executors allow you to run Flink programs in a local Java Virtual Machine, or with within any JVM as part of existing programs. Most examples can be launched locally by simply hitting the "Run" button of your IDE. - - -There are two different kinds of local execution supported in Flink. The LocalExecutionEnvironment is starting the full Flink runtime, including a JobManager and a TaskManager. These include memory management and all the internal algorithms that are executed in the cluster mode. - -The CollectionEnvironment is executing the Flink program on Java collections. This mode will not start the full Flink runtime, so the execution is very low-overhead and lightweight. For example a DataSet.map()-transformation will be executed by applying the map() function to all elements in a Java list. - - -## Debugging - -If you are running Flink programs locally, you can also debug your program like any other Java program. You can either use System.out.println() to write out some internal variables or you can use the debugger. It is possible to set breakpoints within map(), reduce() and all the other methods. -Please also refer to the [debugging section](programming_guide.html#debugging) in the Java API documentation for a guide to testing and local debugging utilities in the Java API. - -## Maven Dependency - -If you are developing your program in a Maven project, you have to add the flink-clients module using this dependency: - -~~~xml -<dependency> -</dependency> -~~~ - -## Local Environment - -The LocalEnvironment is a handle to local execution for Flink programs. 
Use it to run a program within a local JVM - standalone or embedded in other programs. - -The local environment is instantiated via the method ExecutionEnvironment.createLocalEnvironment(). By default, it will use as many local threads for execution as your machine has CPU cores (hardware contexts). You can alternatively specify the desired parallelism. The local environment can be configured to log to the console using enableLogging()/disableLogging(). - -In most cases, calling ExecutionEnvironment.getExecutionEnvironment() is the even better way to go. That method returns a LocalEnvironment when the program is started locally (outside the command line interface), and it returns a pre-configured environment for cluster execution, when the program is invoked by the [command line interface](cli.html). - -~~~java -public static void main(String[] args) throws Exception { - ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment(); - - - data - .filter(new FilterFunction<String>() { - public boolean filter(String value) { - return value.startsWith("http://"); - } - }) - .writeAsText("file:///path/to/result"); - - JobExecutionResult res = env.execute(); -} -~~~ - -The JobExecutionResult object, which is returned after the execution finished, contains the program runtime and the accumulator results. - -The LocalEnvironment allows also to pass custom configuration values to Flink. - -~~~java -Configuration conf = new Configuration(); -final ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment(conf); -~~~ - -*Note:* The local execution environments do not start any web frontend to monitor the execution. - -## Collection Environment - -The execution on Java Collections using the CollectionEnvironment is a low-overhead approach for executing Flink programs. Typical use-cases for this mode are automated tests, debugging and code re-use. - -Users can use algorithms implemented for batch processing also for cases that are more interactive. A slightly changed variant of a Flink program could be used in a Java Application Server for processing incoming requests. - -**Skeleton for Collection-based execution** - -~~~java -public static void main(String[] args) throws Exception { - // initialize a new Collection-based execution environment - final ExecutionEnvironment env = new CollectionEnvironment(); - - DataSet<User> users = env.fromCollection( /* get elements from a Java Collection */); - - /* Data Set transformations ... */ - - // retrieve the resulting Tuple2 elements into a ArrayList. - Collection<...> result = new ArrayList<...>(); - resultDataSet.output(new LocalCollectionOutputFormat<...>(result)); - - // kick off execution. - env.execute(); - - // Do some work with the resulting ArrayList (=Collection). - for(... t : result) { - System.err.println("Result = "+t); - } -} -~~~ - -The flink-java-examples module contains a full example, called CollectionExecutionExample. - -Please note that the execution of the collection-based Flink programs is only possible on small data, which fits into the JVM heap. The execution on collections is not multi-threaded, only one thread is used. ---------------------------------------------------------------------- diff --git a/docs/local_setup.md b/docs/local_setup.md deleted file mode 100644 index a293586..0000000 --- a/docs/local_setup.md +++ /dev/null @@ -1,137 +0,0 @@ ---- -title: "Local Setup" ---- -<!-- -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. 
See the NOTICE file -distributed with this work for additional information -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - -Unless required by applicable law or agreed to in writing, -"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -KIND, either express or implied. See the License for the -specific language governing permissions and limitations ---> - -* This will be replaced by the TOC -{:toc} - -This documentation is intended to provide instructions on how to run Flink locally on a single machine. - - - -## Requirements - -Flink runs on **Linux**, **Mac OS X** and **Windows**. The only requirement for a local setup is **Java 1.6.x** or higher. The following manual assumes a *UNIX-like environment*, for Windows see [Flink on Windows](#flink-on-windows). - -You can check the correct installation of Java by issuing the following command: - -~~~bash -java -version -~~~ - -The command should output something comparable to the following: - -~~~bash -java version "1.6.0_22" -Java(TM) SE Runtime Environment (build 1.6.0_22-b04) -Java HotSpot(TM) 64-Bit Server VM (build 17.1-b03, mixed mode) -~~~ - -## Configuration - -**For local mode Flink is ready to go out of the box and you don't need to change the default configuration.** - -The out of the box configuration will use your default Java installation. You can manually set the environment variable JAVA_HOME or the configuration key env.java.home in conf/flink-conf.yaml if you want to manually override the Java runtime to use. Consult the [configuration page](config.html) for further details about configuring Flink. - - -**You are now ready to start Flink.** Unpack the downloaded archive and change to the newly created flink directory. There you can start Flink in local mode: - -~~~bash -$tar xzf flink-*.tgz -$ cd flink -$bin/start-local.sh -Starting job manager -~~~ - -You can check that the system is running by checking the log files in the logs directory: - -~~~bash -$ tail log/flink-*-jobmanager-*.log -INFO ... - Initializing memory manager with 409 megabytes of memory -INFO ... - Setting up web info server, using web-root directory ... -INFO ... - Web info server will display information about nephele job-manager on localhost, port 8081. -INFO ... - Starting web info server for JobManager on port 8081 -~~~ - -The JobManager will also start a web frontend on port 8081, which you can check with your browser at http://localhost:8081. - - -If you want to run Flink on Windows you need to download, unpack and configure the Flink archive as mentioned above. After that you can either use the **Windows Batch** file (.bat) or use **Cygwin** to run the Flink Jobmanager. - -### Starting with Windows Batch Files - -To start Flink in local mode from the *Windows Batch*, open the command window, navigate to the bin/ directory of Flink and run start-local.bat. - -Note: The bin folder of your Java Runtime Environment must be included in Window's %PATH% variable. Follow this [guide](http://www.java.com/en/download/help/path.xml) to add Java to the %PATH% variable. - -~~~bash -$cd flink -$ cd bin -$start-local.bat -Starting Flink job manager. Webinterface by default on http://localhost:8081/. -Do not close this batch window. Stop job manager by pressing Ctrl+C. -~~~ - -After that, you need to open a second terminal to run jobs using flink.bat. 
- -### Starting with Cygwin and Unix Scripts - -With *Cygwin* you need to start the Cygwin Terminal, navigate to your Flink directory and run the start-local.sh script: - -~~~bash -$ cd flink -$bin/start-local.sh -Starting Nephele job manager -~~~ - -### Installing Flink from Git - -If you are installing Flink from the git repository and you are using the Windows git shell, Cygwin can produce a failure similiar to this one: - -~~~bash -c:/flink/bin/start-local.sh: line 30:$'\r': command not found -~~~ - -This error occurs, because git is automatically transforming UNIX line endings to Windows style line endings when running in Windows. The problem is, that Cygwin can only deal with UNIX style line endings. The solution is to adjust the Cygwin settings to deal with the correct line endings by following these three steps: - -1. Start a Cygwin shell. - -2. Determine your home directory by entering - -~~~bash -cd; pwd -~~~ - -It will return a path under the Cygwin root path. - -2. Using NotePad, WordPad or a different text editor open the file .bash_profile in the home directory and append the following: (If the file does not exist you have to create it) - -~~~bash -export SHELLOPTS -set -o igncr -~~~ - -Save the file and open a new bash shell. ---------------------------------------------------------------------- diff --git a/docs/ml/alternating_least_squares.md b/docs/ml/alternating_least_squares.md deleted file mode 100644 index 7a4a5d5..0000000 --- a/docs/ml/alternating_least_squares.md +++ /dev/null @@ -1,157 +0,0 @@ ---- -mathjax: include -title: Alternating Least Squares ---- -<!-- -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - -Unless required by applicable law or agreed to in writing, -"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -KIND, either express or implied. See the License for the -specific language governing permissions and limitations ---> - -* This will be replaced by the TOC -{:toc} - -## Description - -The alternating least squares (ALS) algorithm factorizes a given matrix $R$ into two factors $U$ and $V$ such that $R \approx U^TV$. -The unknown row dimension is given as a parameter to the algorithm and is called latent factors. -Since matrix factorization can be used in the context of recommendation, the matrices $U$ and $V$ can be called user and item matrix, respectively. -The $i$th column of the user matrix is denoted by $u_i$ and the $i$th column of the item matrix is $v_i$. -The matrix $R$ can be called the ratings matrix with $$(R)_{i,j} = r_{i,j}$$. - -In order to find the user and item matrix, the following problem is solved: - -$$\arg\min_{U,V} \sum_{\{i,j\mid r_{i,j} \not= 0\}} \left(r_{i,j} - u_{i}^Tv_{j}\right)^2 + -\lambda \left(\sum_{i} n_{u_i} \left\lVert u_i \right\rVert^2 + \sum_{j} n_{v_j} \left\lVert v_j \right\rVert^2 \right)$$ - -with $\lambda$ being the regularization factor, $$n_{u_i}$$ being the number of items the user $i$ has rated and $$n_{v_j}$$ being the number of times the item $j$ has been rated. -This regularization scheme to avoid overfitting is called weighted-$\lambda$-regularization. -Details can be found in the work of [Zhou et al.](http://dx.doi.org/10.1007/978-3-540-68880-8_32). 
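
As a sketch of the alternating half-step described below (here $I_i$ denotes the set of items rated by user $i$ and $E$ the identity matrix; this notation is introduced only for the illustration), fixing the item matrix $V$ reduces the update of each user vector $u_i$ to the regularized normal equations

$$\left(\sum_{j \in I_i} v_j v_j^T + \lambda n_{u_i} E\right) u_i = \sum_{j \in I_i} r_{i,j} v_j$$

The item vectors are updated analogously, with the roles of $U$ and $V$ exchanged.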
- -By fixing one of the matrices $U$ or $V$, we obtain a quadratic form which can be solved directly. -The solution of the modified problem is guaranteed to monotonically decrease the overall cost function. -By applying this step alternately to the matrices $U$ and $V$, we can iteratively improve the matrix factorization. - -The matrix $R$ is given in its sparse representation as a tuple of $(i, j, r)$ where $i$ denotes the row index, $j$ the column index and $r$ is the matrix value at position $(i,j)$. - - -## Parameters - -The alternating least squares implementation can be controlled by the following parameters: - - <table class="table table-bordered"> - <tr> - <th class="text-left" style="width: 20%">Parameters</th> - <th class="text-center">Description</th> - </tr> - - <tbody> - <tr> - <td><strong>NumFactors</strong></td> - <td> - <p> - The number of latent factors to use for the underlying model. - It is equivalent to the dimension of the calculated user and item vectors. - (Default value: <strong>10</strong>) - </p> - </td> - </tr> - <tr> - <td><strong>Lambda</strong></td> - <td> - <p> - Regularization factor. Tune this value in order to avoid overfitting or poor performance due to strong generalization. - (Default value: <strong>1</strong>) - </p> - </td> - </tr> - <tr> - <td><strong>Iterations</strong></td> - <td> - <p> - The maximum number of iterations. - (Default value: <strong>10</strong>) - </p> - </td> - </tr> - <tr> - <td><strong>Blocks</strong></td> - <td> - <p> - The number of blocks into which the user and item matrix are grouped. - The fewer blocks one uses, the less data is sent redundantly. - However, bigger blocks entail bigger update messages which have to be stored on the heap. - If the algorithm fails because of an OutOfMemoryException, then try to increase the number of blocks. - (Default value: '''None''') - </p> - </td> - </tr> - <tr> - <td><strong>Seed</strong></td> - <td> - <p> - Random seed used to generate the initial item matrix for the algorithm. - (Default value: <strong>0</strong>) - </p> - </td> - </tr> - <tr> - <td><strong>TemporaryPath</strong></td> - <td> - <p> - Path to a temporary directory into which intermediate results are stored. - If this value is set, then the algorithm is split into two preprocessing steps, the ALS iteration and a post-processing step which calculates a last ALS half-step. - The preprocessing steps calculate the <code>OutBlockInformation</code> and <code>InBlockInformation</code> for the given rating matrix. - The results of the individual steps are stored in the specified directory. - By splitting the algorithm into multiple smaller steps, Flink does not have to split the available memory amongst too many operators. - This allows the system to process bigger individual messages and improves the overall performance. 
- (Default value: <strong>None</strong>) - </p> - </td> - </tr> - </tbody> - </table> - -## Examples - -{% highlight scala %} -// Read input data set from a csv file -val inputDS: DataSet[(Int, Int, Double)] = env.readCsvFile[(Int, Int, Double)]( - pathToTrainingFile) - -// Setup the ALS learner -val als = ALS() -.setIterations(10) -.setNumFactors(10) -.setBlocks(100) -.setTemporaryPath("hdfs://tempPath") - -// Set the other parameters via a parameter map -val parameters = ParameterMap() - -// Calculate the factorization -val factorization = als.fit(inputDS, parameters) - -// Read the testing data set from a csv file -val testingDS: DataSet[(Int, Int)] = env.readCsvFile[(Int, Int)](pathToData) - -// Calculate the ratings according to the matrix factorization -val predictedRatings = factorization.transform(testingDS) -{% endhighlight %} \ No newline at end of file ---------------------------------------------------------------------- diff --git a/docs/ml/cocoa.md b/docs/ml/cocoa.md deleted file mode 100644 index 0bf8d67..0000000 --- a/docs/ml/cocoa.md +++ /dev/null @@ -1,164 +0,0 @@ ---- -mathjax: include -title: Communication efficient distributed dual coordinate ascent (CoCoA) ---- -<!-- -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - -Unless required by applicable law or agreed to in writing, -"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -KIND, either express or implied. See the License for the -specific language governing permissions and limitations ---> - -* This will be replaced by the TOC -{:toc} - -## Description - -Implements the communication-efficient distributed dual coordinate ascent algorithm with hinge-loss function. -The algorithm can be used to train a SVM with soft-margin. -The algorithm solves the following minimization problem: - -$$\min_{\mathbf{w} \in \mathbb{R}^d} \frac{\lambda}{2} \left\lVert \mathbf{w} \right\rVert^2 + \frac{1}{n} \sum_{i=1}^n l_{i}\left(\mathbf{w}^T\mathbf{x}_i\right)$$ - -with $\mathbf{w}$ being the weight vector, $\lambda$ being the regularization constant, -$$\mathbf{x}_i \in \mathbb{R}^d$$ being the data points and $$l_{i}$$ being the convex loss -functions, which can also depend on the labels $$y_{i} \in \mathbb{R}$$. -In the current implementation the regularizer is the $\ell_2$-norm and the loss functions are the hinge-loss functions: - - $$l_{i} = \max\left(0, 1 - y_{i} \mathbf{w}^T\mathbf{x}_i \right)$$ - -With these choices, the problem definition is equivalent to a SVM with soft-margin. -Thus, the algorithm allows us to train a SVM with soft-margin. - -The minimization problem is solved by applying stochastic dual coordinate ascent (SDCA). -In order to make the algorithm efficient in a distributed setting, the CoCoA algorithm calculates -several iterations of SDCA locally on a data block before merging the local updates into a -valid global state. -This state is redistributed to the different data partitions where the next round of local SDCA -iterations is then executed. -The number of outer iterations and local SDCA iterations control the overall network costs, because -there is only network communication required for each outer iteration. 
-The local SDCA iterations are embarrassingly parallel once the individual data partitions have been -distributed across the cluster. - -The implementation of this algorithm is based on the work of -[Jaggi et al.](http://arxiv.org/abs/1409.1458 here) - -## Parameters - -The CoCoA implementation can be controlled by the following parameters: - - <table class="table table-bordered"> - <tr> - <th class="text-left" style="width: 20%">Parameters</th> - <th class="text-center">Description</th> - </tr> - - <tbody> - <tr> - <td><strong>Blocks</strong></td> - <td> - <p> - Sets the number of blocks into which the input data will be split. - On each block the local stochastic dual coordinate ascent method is executed. - This number should be set at least to the degree of parallelism. - If no value is specified, then the parallelism of the input DataSet is used as the number of blocks. - (Default value: <strong>None</strong>) - </p> - </td> - </tr> - <tr> - <td><strong>Iterations</strong></td> - <td> - <p> - Defines the maximum number of iterations of the outer loop method. - In other words, it defines how often the SDCA method is applied to the blocked data. - After each iteration, the locally computed weight vector updates have to be reduced to update the global weight vector value. - The new weight vector is broadcast to all SDCA tasks at the beginning of each iteration. - (Default value: <strong>10</strong>) - </p> - </td> - </tr> - <tr> - <td><strong>LocalIterations</strong></td> - <td> - <p> - Defines the maximum number of SDCA iterations. - In other words, it defines how many data points are drawn from each local data block to calculate the stochastic dual coordinate ascent. - (Default value: <strong>10</strong>) - </p> - </td> - </tr> - <tr> - <td><strong>Regularization</strong></td> - <td> - <p> - Defines the regularization constant of the CoCoA algorithm. - The higher the value, the smaller will the 2-norm of the weight vector be. - In case of a SVM with hinge loss this means that the SVM margin will be wider even though it might contain some false classifications. - (Default value: <strong>1.0</strong>) - </p> - </td> - </tr> - <tr> - <td><strong>Stepsize</strong></td> - <td> - <p> - Defines the initial step size for the updates of the weight vector. - The larger the step size is, the larger will be the contribution of the weight vector updates to the next weight vector value. - The effective scaling of the updates is $\frac{stepsize}{blocks}$. - This value has to be tuned in case that the algorithm becomes instable. - (Default value: <strong>1.0</strong>) - </p> - </td> - </tr> - <tr> - <td><strong>Seed</strong></td> - <td> - <p> - Defines the seed to initialize the random number generator. - The seed directly controls which data points are chosen for the SDCA method. 
- (Default value: <strong>0</strong>) - </p> - </td> - </tr> - </tbody> - </table> - -## Examples - -{% highlight scala %} -// Read the training data set - -// Create the CoCoA learner -val cocoa = CoCoA() -.setBlocks(10) -.setIterations(10) -.setLocalIterations(10) -.setRegularization(0.5) -.setStepsize(0.5) - -// Learn the SVM model -val svm = cocoa.fit(trainingDS) - -// Read the testing data set - -// Calculate the predictions for the testing data set -val predictionDS: DataSet[LabeledVector] = model.transform(testingDS) -{% endhighlight %} \ No newline at end of file ---------------------------------------------------------------------- diff --git a/docs/ml/multiple_linear_regression.md b/docs/ml/multiple_linear_regression.md deleted file mode 100644 index 840e899..0000000 --- a/docs/ml/multiple_linear_regression.md +++ /dev/null @@ -1,124 +0,0 @@ ---- -mathjax: include -title: "Multiple linear regression" ---- -<!-- -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - -Unless required by applicable law or agreed to in writing, -"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -KIND, either express or implied. See the License for the -specific language governing permissions and limitations ---> - -* This will be replaced by the TOC -{:toc} - -## Description - - Multiple linear regression tries to find a linear function which best fits the provided input data. - Given a set of input data with its value $(\mathbf{x}, y)$, the multiple linear regression finds - a vector $\mathbf{w}$ such that the sum of the squared residuals is minimized: - - $$S(\mathbf{w}) = \sum_{i=1} \left(y - \mathbf{w}^T\mathbf{x_i} \right)^2$$ - - Written in matrix notation, we obtain the following formulation: - - $$\mathbf{w}^* = \arg \min_{\mathbf{w}} (\mathbf{y} - X\mathbf{w})^2$$ - - This problem has a closed form solution which is given by: - - $$\mathbf{w}^* = \left(X^TX\right)^{-1}X^T\mathbf{y}$$ - - However, in cases where the input data set is so huge that a complete parse over the whole data - set is prohibitive, one can apply stochastic gradient descent (SGD) to approximate the solution. - The SGD first calculates for a random subset of the input data set the gradients. The gradient - for a given point $\mathbf{x}_i$ is given by: - - $$\nabla_{\mathbf{w}} S(\mathbf{w}, \mathbf{x_i}) = 2\left(\mathbf{w}^T\mathbf{x_i} - - y\right)\mathbf{x_i}$$ - - The gradients are averaged and scaled. The scaling is defined by $\gamma = \frac{s}{\sqrt{j}}$ - with $s$ being the initial step size and $j$ being the current iteration number. The resulting gradient is subtracted from the - current weight vector giving the new weight vector for the next iteration: - - $$\mathbf{w}_{t+1} = \mathbf{w}_t - \gamma \frac{1}{n}\sum_{i=1}^n \nabla_{\mathbf{w}} S(\mathbf{w}, \mathbf{x_i})$$ - - The multiple linear regression algorithm computes either a fixed number of SGD iterations or terminates based on a dynamic convergence criterion. 
- The convergence criterion is the relative change in the sum of squared residuals: - - $$\frac{S_{k-1} - S_k}{S_{k-1}} < \rho$$ - -## Parameters - - The multiple linear regression implementation can be controlled by the following parameters: - - <table class="table table-bordered"> - <tr> - <th class="text-left" style="width: 20%">Parameters</th> - <th class="text-center">Description</th> - </tr> - - <tbody> - <tr> - <td><strong>Iterations</strong></td> - <td> - <p> - The maximum number of iterations. (Default value: <strong>10</strong>) - </p> - </td> - </tr> - <tr> - <td><strong>Stepsize</strong></td> - <td> - <p> - Initial step size for the gradient descent method. - This value controls how far the gradient descent method moves in the opposite direction of the gradient. - Tuning this parameter might be crucial to make it stable and to obtain a better performance. - (Default value: <strong>0.1</strong>) - </p> - </td> - </tr> - <tr> - <td><strong>ConvergenceThreshold</strong></td> - <td> - <p> - Threshold for relative change of the sum of squared residuals until the iteration is stopped. - (Default value: <strong>None</strong>) - </p> - </td> - </tr> - </tbody> - </table> - -## Examples - -{% highlight scala %} -// Create multiple linear regression learner -val mlr = MultipleLinearRegression() -.setIterations(10) -.setStepsize(0.5) -.setConvergenceThreshold(0.001) - -// Obtain training and testing data set -val trainingDS: DataSet[LabeledVector] = ... -val testingDS: DataSet[Vector] = ... - -// Fit the linear model to the provided data -val model = mlr.fit(trainingDS) - -// Calculate the predictions for the test data -val predictions = model.transform(testingDS) -{% endhighlight %} ---------------------------------------------------------------------- diff --git a/docs/ml/polynomial_base_feature_mapper.md b/docs/ml/polynomial_base_feature_mapper.md deleted file mode 100644 index 2964f04..0000000 --- a/docs/ml/polynomial_base_feature_mapper.md +++ /dev/null @@ -1,91 +0,0 @@ ---- -mathjax: include -title: Polynomial Base Feature Mapper ---- -<!-- -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - -Unless required by applicable law or agreed to in writing, -"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -KIND, either express or implied. See the License for the -specific language governing permissions and limitations ---> - -* This will be replaced by the TOC -{:toc} - -## Description - -The polynomial base feature mapper maps a vector into the polynomial feature space of degree $d$. -The dimension of the input vector determines the number of polynomial factors whose values are the respective vector entries. -Given a vector $(x, y, z, \ldots)^T$ the resulting feature vector looks like: - -$$\left(x, y, z, x^2, xy, y^2, yz, z^2, x^3, x^2y, x^2z, xy^2, xyz, xz^2, y^3, \ldots\right)^T$$ - -Flink's implementation orders the polynomials in decreasing order of their degree. - -Given the vector $\left(3,2\right)^T$, the polynomial base feature vector of degree 3 would look like - - $$\left(3^3, 3^2\cdot2, 3\cdot2^2, 2^3, 3^2, 3\cdot2, 2^2, 3, 2\right)^T$$ - -This transformer can be prepended to all Transformer and Learner implementations which expect an input of type LabeledVector. 
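
As a small illustration of the mapping (the function below is illustrative Scala and not part of the Flink API; it only reproduces the two-dimensional, decreasing-degree ordering shown in the example above):

{% highlight scala %}
// Expands a two-dimensional point (x, y) into the polynomial feature space of
// the given degree, highest degree first, matching the (3, 2) degree-3 example.
def polynomialFeatures2D(x: Double, y: Double, degree: Int): Vector[Double] =
  (degree to 1 by -1).toVector.flatMap { d =>
    (0 to d).map(i => math.pow(x, d - i) * math.pow(y, i))
  }

// polynomialFeatures2D(3.0, 2.0, 3)
// == Vector(27.0, 18.0, 12.0, 8.0, 9.0, 6.0, 4.0, 3.0, 2.0)
{% endhighlight %}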
- -## Parameters - -The polynomial base feature mapper can be controlled by the following parameters: - -<table class="table table-bordered"> - <tr> - <th class="text-left" style="width: 20%">Parameters</th> - <th class="text-center">Description</th> - </tr> - - <tbody> - <tr> - <td><strong>Degree</strong></td> - <td> - <p> - The maximum polynomial degree. - (Default value: <strong>10</strong>) - </p> - </td> - </tr> - </tbody> - </table> - -## Examples - -{% highlight scala %} -// Obtain the training data set -val trainingDS: DataSet[LabeledVector] = ... - -// Setup polynomial base feature extractor of degree 3 -val polyBase = PolynomialBase() -.setDegree(3) - -// Setup the multiple linear regression learner -val mlr = MultipleLinearRegression() - -// Control the learner via the parameter map -val parameters = ParameterMap() - -// Create pipeline PolynomialBase -> MultipleLinearRegression -val chained = polyBase.chain(mlr) - -// Learn the model -val model = chained.fit(trainingDS) -{% endhighlight %} \ No newline at end of file ---------------------------------------------------------------------- diff --git a/docs/ml/standard_scaler.md b/docs/ml/standard_scaler.md deleted file mode 100644 index aae4620..0000000 --- a/docs/ml/standard_scaler.md +++ /dev/null @@ -1,90 +0,0 @@ ---- -mathjax: include -title: "Standard Scaler" ---- -<!-- -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - -Unless required by applicable law or agreed to in writing, -"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -KIND, either express or implied. See the License for the -specific language governing permissions and limitations ---> - -* This will be replaced by the TOC -{:toc} - -## Description - - The standard scaler scales the given data set, so that all features will have a user specified mean and variance. - In case the user does not provide a specific mean and standard deviation, the standard scaler transforms the features of the input data set to have mean equal to 0 and standard deviation equal to 1. - Given a set of input data $x_{1}, x_{2},... x_{n}$, with mean: - - $$\bar{x} = \frac{1}{n}\sum_{i=1}^{n}x_{i}$$ - - and standard deviation: - - $$\sigma_{x}=\sqrt{ \frac{1}{n} \sum_{i=1}^{n}(x_{i}-\bar{x})^{2}}$$ - -The scaled data set $z_{1}, z_{2},...,z_{n}$ will be: - - $$z_{i}= std \left (\frac{x_{i} - \bar{x} }{\sigma_{x}}\right ) + mean$$ - -where $\textit{std}$ and $\textit{mean}$ are the user specified values for the standard deviation and mean. - -## Parameters - -The standard scaler implementation can be controlled by the following two parameters: - - <table class="table table-bordered"> - <tr> - <th class="text-left" style="width: 20%">Parameters</th> - <th class="text-center">Description</th> - </tr> - - <tbody> - <tr> - <td><strong>Mean</strong></td> - <td> - <p> - The mean of the scaled data set. (Default value: <strong>0.0</strong>) - </p> - </td> - </tr> - <tr> - <td><strong>Std</strong></td> - <td> - <p> - The standard deviation of the scaled data set. 
(Default value: <strong>1.0</strong>) - </p> - </td> - </tr> - </tbody> -</table> - -## Examples - -{% highlight scala %} -// Create standard scaler transformer -val scaler = StandardScaler() -.setMean(10.0) -.setStd(2.0) - -// Obtain data set to be scaled -val dataSet: DataSet[Vector] = ... - -// Scale the provided data set to have mean=10.0 and std=2.0 -val scaledDS = scaler.transform(dataSet) -{% endhighlight %} ---------------------------------------------------------------------- diff --git a/docs/page/css/codetabs.css b/docs/page/css/codetabs.css new file mode 100644 index 0000000..420d559 --- /dev/null +++ b/docs/page/css/codetabs.css @@ -0,0 +1,62 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * + * Unless required by applicable law or agreed to in writing, + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations +**/ + +/** + * Make dropdown menus in nav bars show on hover instead of click + * using solution at http://stackoverflow.com/questions/8878033/how- + **/ + display: block; +} + + content: none; +} + +/** Make the submenus open on hover on the parent menu item */ + display: block; +} + +/** Make the submenus be invisible until the parent menu item is hovered upon */ + display: none; +} + +/** + * Made the navigation bar buttons not grey out when clicked. + * Essentially making nav bar buttons not react to clicks, only hover events. + */ +.navbar .nav li.dropdown.open > .dropdown-toggle { + background-color: transparent; +} + +/** + * Made the active tab caption blue. Otherwise the active tab is black, and inactive tab is blue. + * That looks weird. Changed the colors to active - blue, inactive - black, and + * no color change on hover. + */ +.nav-tabs > .active > a, .nav-tabs > .active > a:hover { + color: #08c; +} + +.nav-tabs > li > a, .nav-tabs > li > a:hover { + color: #333; +} \ No newline at end of file ---------------------------------------------------------------------- new file mode 100644 index 0000000..2a32910 --- /dev/null @@ -0,0 +1,122 @@ +/* +Licensed to the Apache Software Foundation (ASF) under one +or more contributor license agreements. See the NOTICE file +distributed with this work for additional information +to you under the Apache License, Version 2.0 (the +"License"); you may not use this file except in compliance +with the License. You may obtain a copy of the License at + + +Unless required by applicable law or agreed to in writing, +"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +KIND, either express or implied. See the License for the +specific language governing permissions and limitations +*/ +/*============================================================================= + Navbar at the top of the page +=============================================================================*/ + +/* Padding at top because of the fixed navbar. */ +body { +} + +/* Our logo. 
*/ +.navbar-logo { + padding: 5px 15px 5px 15px; +} +.navbar-logo img { + height: 40px; +} + +.navbar-default .navbar-nav > li > a { + color: black; + font-weight: bold; +} +.navbar-default .navbar-nav > li > a:hover { + background: #E7E7E7; +} + + color: black; +} + +.version { + display: block-inline; + font-size: 90%; +} + +/*============================================================================= + Navbar at the side of the page +=============================================================================*/ + +/* Move the side nav a little bit down to align with the main heading */ +#markdown-toc { + font-size: 90%; +} + +/* Custom list styling */ +#markdown-toc, #markdown-toc ul { + list-style: none; + display: block; + position: relative; + margin-bottom: 0; +} + +/* All element */ +#markdown-toc li > a { + display: block; + border: 1px solid #E5E5E5; + margin:-1px; +} +#markdown-toc li > a:hover, +#markdown-toc li > a:focus { + text-decoration: none; + background-color: #eee; +} + +/* 1st-level elements */ +#markdown-toc > li > a { + font-weight: bold; +} + +/* 2nd-level element */ +#markdown-toc > li li > a { + padding-left: 20px; /* A little more indentation*/ +} + +/* >= 3rd-level element */ +#markdown-toc > li li li { + display: none; /* hide */ +} + +#markdown-toc li:last-child > a { + border-bottom: 1px solid #E5E5E5; +} + +/*============================================================================= + Text +=============================================================================*/ + +h2, h3 { + border-bottom: 1px solid #E5E5E5; +} + + +code { + background: none; + color: black; +} + +pre { + font-size: 85%; +} ---------------------------------------------------------------------- diff --git a/docs/page/css/syntax.css b/docs/page/css/syntax.css new file mode 100644 index 0000000..ba3c0ba --- /dev/null +++ b/docs/page/css/syntax.css @@ -0,0 +1,79 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * + * Unless required by applicable law or agreed to in writing, + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. 
See the License for the + * specific language governing permissions and limitations +**/ + +.highlight { background: #ffffff; } +.highlight .c { color: #999988; font-style: italic } /* Comment */ +.highlight .err { color: #a61717; background-color: #e3d2d2 } /* Error */ +.highlight .k { font-weight: bold } /* Keyword */ +.highlight .o { font-weight: bold } /* Operator */ +.highlight .cm { color: #999988; font-style: italic } /* Comment.Multiline */ +.highlight .cp { color: #999999; font-weight: bold } /* Comment.Preproc */ +.highlight .c1 { color: #999988; font-style: italic } /* Comment.Single */ +.highlight .cs { color: #999999; font-weight: bold; font-style: italic } /* Comment.Special */ +.highlight .gd { color: #000000; background-color: #ffdddd } /* Generic.Deleted */ +.highlight .gd .x { color: #000000; background-color: #ffaaaa } /* Generic.Deleted.Specific */ +.highlight .ge { font-style: italic } /* Generic.Emph */ +.highlight .gr { color: #aa0000 } /* Generic.Error */ +.highlight .gh { color: #999999 } /* Generic.Heading */ +.highlight .gi { color: #000000; background-color: #ddffdd } /* Generic.Inserted */ +.highlight .gi .x { color: #000000; background-color: #aaffaa } /* Generic.Inserted.Specific */ +.highlight .go { color: #888888 } /* Generic.Output */ +.highlight .gp { color: #555555 } /* Generic.Prompt */ +.highlight .gs { font-weight: bold } /* Generic.Strong */ +.highlight .gu { color: #aaaaaa } /* Generic.Subheading */ +.highlight .gt { color: #aa0000 } /* Generic.Traceback */ +.highlight .kc { font-weight: bold } /* Keyword.Constant */ +.highlight .kd { font-weight: bold } /* Keyword.Declaration */ +.highlight .kp { font-weight: bold } /* Keyword.Pseudo */ +.highlight .kr { font-weight: bold } /* Keyword.Reserved */ +.highlight .kt { color: #445588; font-weight: bold } /* Keyword.Type */ +.highlight .m { color: #009999 } /* Literal.Number */ +.highlight .s { color: #d14 } /* Literal.String */ +.highlight .na { color: #008080 } /* Name.Attribute */ +.highlight .nb { color: #0086B3 } /* Name.Builtin */ +.highlight .nc { color: #445588; font-weight: bold } /* Name.Class */ +.highlight .no { color: #008080 } /* Name.Constant */ +.highlight .ni { color: #800080 } /* Name.Entity */ +.highlight .ne { color: #990000; font-weight: bold } /* Name.Exception */ +.highlight .nf { color: #990000; font-weight: bold } /* Name.Function */ +.highlight .nn { color: #555555 } /* Name.Namespace */ +.highlight .nt { color: #000080 } /* Name.Tag */ +.highlight .nv { color: #008080 } /* Name.Variable */ +.highlight .ow { font-weight: bold } /* Operator.Word */ +.highlight .w { color: #bbbbbb } /* Text.Whitespace */ +.highlight .mf { color: #009999 } /* Literal.Number.Float */ +.highlight .mh { color: #009999 } /* Literal.Number.Hex */ +.highlight .mi { color: #009999 } /* Literal.Number.Integer */ +.highlight .mo { color: #009999 } /* Literal.Number.Oct */ +.highlight .sb { color: #d14 } /* Literal.String.Backtick */ +.highlight .sc { color: #d14 } /* Literal.String.Char */ +.highlight .sd { color: #d14 } /* Literal.String.Doc */ +.highlight .s2 { color: #d14 } /* Literal.String.Double */ +.highlight .se { color: #d14 } /* Literal.String.Escape */ +.highlight .sh { color: #d14 } /* Literal.String.Heredoc */ +.highlight .si { color: #d14 } /* Literal.String.Interpol */ +.highlight .sx { color: #d14 } /* Literal.String.Other */ +.highlight .sr { color: #009926 } /* Literal.String.Regex */ +.highlight .s1 { color: #d14 } /* Literal.String.Single */ +.highlight .ss { color: #990073 } /* 
Literal.String.Symbol */ +.highlight .bp { color: #999999 } /* Name.Builtin.Pseudo */ +.highlight .vc { color: #008080 } /* Name.Variable.Class */ +.highlight .vg { color: #008080 } /* Name.Variable.Global */ +.highlight .vi { color: #008080 } /* Name.Variable.Instance */ +.highlight .il { color: #009999 } /* Literal.Number.Integer.Long */ ---------------------------------------------------------------------- diff --git a/docs/page/favicon.ico b/docs/page/favicon.ico new file mode 100644 index 0000000..34a467a Binary files /dev/null and b/docs/page/favicon.ico differ ---------------------------------------------------------------------- new file mode 100644 index 0000000..35b8673 --- /dev/null @@ -0,0 +1,17 @@ +All image files in the folder and its subfolders are +licensed to the Apache Software Foundation (ASF) under one +or more contributor license agreements. See the NOTICE file +distributed with this work for additional information +to you under the Apache License, Version 2.0 (the +"License"); you may not use this file except in compliance +with the License. You may obtain a copy of the License at + + +Unless required by applicable law or agreed to in writing, +"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +KIND, either express or implied. See the License for the +specific language governing permissions and limitations \ No newline at end of file ---------------------------------------------------------------------- diff --git a/docs/page/img/navbar-brand-logo.jpg b/docs/page/img/navbar-brand-logo.jpg new file mode 100644 index 0000000..5993ee8 Binary files /dev/null and b/docs/page/img/navbar-brand-logo.jpg differ ---------------------------------------------------------------------- diff --git a/docs/page/img/quickstart-example/compiler-webclient-new.png b/docs/page/img/quickstart-example/compiler-webclient-new.png new file mode 100644 index 0000000..e60689e Binary files /dev/null and b/docs/page/img/quickstart-example/compiler-webclient-new.png differ ---------------------------------------------------------------------- diff --git a/docs/page/img/quickstart-example/jobmanager-running-new.png b/docs/page/img/quickstart-example/jobmanager-running-new.png new file mode 100644 index 0000000..6255022 Binary files /dev/null and b/docs/page/img/quickstart-example/jobmanager-running-new.png differ
2018-09-26 05:37:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40826329588890076, "perplexity": 13432.785974590715}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267163326.85/warc/CC-MAIN-20180926041849-20180926062249-00468.warc.gz"}
http://christopherdanielson.wordpress.com/category/talking-math-with-your-kids/
# Category Archives: Talking math with your kids ## Armholes (6-year old topology) We were packing for a trip recently. I have developed a system for getting the kids packed. It is beautiful. Here’s how it works: 1. Send kids to basement to get suitcases. 2. Keep suitcases on first floor. 3. Send kids upstairs to get one type of item at a time. E.g. Three pairs of underpants. Then three pairs of socks. Et cetera. 4. Kids throw each type of item in the suitcase. 5. Repeat steps 3 and 4 as often as necessary. 6. Done. Seriously. It’s awesome. I made an observation with Tabitha partway through. Me: Isn’t it strange how a pair of socks is two socks, but a pair of underpants is only one thing? Tabitha (six years old): Yeah. It should “a pair plus one” because there are three holes. Me: Wow. I hadn’t thought of that. So how many holes does a shirt have? T: Three….No four! Me: How do you figure? T: The one you put your head through, the arms, and the head hole. If you are like me, you may be a bit behind the curve on her language here. “The one you put your head through” is the one that ends up at your waist once your shirt is on. I had to think about this for a moment. A few days later, I was curious to probe her thinking a bit further. She was getting dressed (a process which is always slow, and occasionally very frustrating for the parents): Me: Do you remember how you said a pair of underpants has three holes and a shirt has four? T: Ha! Yeah! Me: I was thinking about that and wondering whether there are any kinds of clothing that have one hole or two holes. T: Socks have one hole! Me: Oh. Nice. Sometimes Daddy’s socks have two holes, though. T: Yeah. When they’re broken. By this time, she finally has the underpants on and her pants are being slowly pulled on. Me: Wait. You need socks! She goes to her dresser and proceeds to sort through the very messy sock drawer. T: There are no matches. I find what appears to be two socks balled up together. T: No! Those aren’t socks! Those are for putting over tights to keep your legs warm. We look at each other. Big smile. TThose have two holes! ## Summer project The Minnesota State Fair is a fabulous event (Twelve days of fun ending Labor Day!). Rachel and I love the Fair, and we have passed this love along to our children. Griffin must have been thinking about the wonders of the State Fair as summer slowly (oh, so slowly!) unfolded on our fair state. He asked a question at breakfast one recent morning. Griffin (eight years old): How tall is the Giant Slide? Me: Good question. I would guess…40 feet. What’s your guess? G: 45 feet. OK. That’s a mistake. We should have written our guesses down privately to avoid influencing each other. Oh well. Me: Let’s look it up. Google returns nothing useful. It does return this awesome video, though, which we watch together. Me: I found lots of information mentioning the Giant Slide, but nothing on its height. G: Measure it yourself, then! Me: Good idea. How should we do that? G: We’re gonna need a lot of tape measures put together. This will be a summer project for us: Measuring stuff without putting a ruler next to it. I’ll report on our progress in this space. ## Zero=half revisited A few weeks back, Tabitha asked Why are zero and half the same? I was curious to know whether that conversation had affected her thinking in any way. So I asked. Me: Tabitha, do you still think zero and half are the same? Or have you not thought about that in a while? Tabitha (six years old): I think…Half isn’t a number. 
I mean, it’s made of numbers put together, but it’s not a number. Me: What is a number? I love this question. How people answer it can be revealing. I asked a version of it of Griffin when he was in Kindergarten. T: $4\frac{1}{2}$ is a number. Me: Oh? $4\frac{1}{2}$ is a number, but not one-half? T: Yeah. But it doesn’t really get used. Me: What do you mean by that? T: Well, people say, 1, 2, 3, 4, 5, 6, but not $4\frac{1}{2}$. Me: Oh. So when we count count, we skip over $4\frac{1}{2}$? T: Yeah. We are both silent for a few moments, thinking. T: Zero, too. People don’t count starting at zero. They say 1, 2, 3… Me: Yeah. Isn’t that funny? T: It should go half, zero, 1, 2, 3… It seems clear that has indeed been thinking about that conversation. She is struggling with the betweenness of $\frac{1}{2}$; that it expresses a number between 0 and 1. ## Division and fractions with a third grader I found some notes on a conversation I had with Griffin last fall. I do not remember the context for it. Me: Do you know what 12÷2 is? Griffin (8 years old): 6 Me: How do you know that’s right? G: 2 times 6 is 12. Me: What about 26÷2? G: 13 Me: How do you know that? G: There were 26 kids in Ms. Starr’s class [in first grade],  so it was her magic number. We had 13 pairs of kids. Me: What about 34÷2? G: Well, 15 plus 15 is 30…so…19 Here we see the role of cognitive load on mental computation. Griffin is splitting up 34 as 30 and 4 and finding pairs to add to each. Formally, he’s using the distributive property: $2(a+b)=2a+2b$. He wants to choose $a$ and $b$ so that $2a+2b=30+4$. But by the time he figures out that $a=15$, he loses track of the fact that $2b=4$ and just adds 4 to 15. At least, I consider this to be the most likely explanation of his words. My notes on the conversation only have (back and forth), which indicates that there was some follow-up discussion in which we located and fixed the error. The details are lost to history. Our conversation continued. Me: So 12÷2 is 6 because 2×6 is 12. What is 12÷1? G: [long pause; much longer than for any of the first three tasks] 12. Me: How do you know this? G: Because if you gave 1 person 12 things, they would have all 12. Let’s pause for a moment. This is what it means to learn mathematics. Mathematical ideas have multiple interpretations which people encounter as they live their lives. It is (or should be) a major goal of mathematics instruction to help people reconcile these multiple interpretations. Griffin has so far relied upon three interpretations of division: (1) A division statement is equivalent to a multiplication statement (the fact family interpretation, which is closely related to thinking of division as the inverse of multiplication), (2) Division tells how many groups of a particular size we can make (Ms. Starr’s class has 13 pairs of students—this is the quotative interpretation of division) and (3) Division tells us how many will be in each of a particular number of same-sized groups (Put 12 things into 1 group, and each group has 12 things). This wasn’t a lesson on multiplication, so I wasn’t too worried about getting Griffin to reconcile these interpretations. Instead, I was curious which (if any) would survive being pushed further. Me: What is $12 \div \frac{1}{2}$? G: [pause, but not as long as for 12÷1] Two. Me: How do you know that? G: Half of 12 is 6, and 12÷6 is 2, so it’s 2. Me: OK. You know what a half dollar is, right? G: Yeah. 50 cents. Me: How many half dollars are in a dollar? G: Two. 
Me: How many half dollars are in 12 dollars? G: [long thoughtful pause] Twenty-four. Me: How do you know that? G: I can’t say. Me: One more. How many quarters are in 12 dollars? G: Oh no! [pause] Forty-eight. Because a quarter is half of a half and so there are twice as many of them as half dollars. 2 times 24=48. ## A kindergartener on units [Talking math with your parents] The following conversation took place in my house the other day. Tabitha (6) had been informed by her mother that she (Tabitha) needed to eat something healthy before eating a chocolate-covered donut. I was—and remain—ignorant of the origins of this donut. I came in partway through the conversation. Rachel: I’m going to cut you a small slice of this apple. Tabitha (6 years old): Do I have to eat the whole thing? R: The whole apple? No. T: No, the whole slice! R: Yes! If you are unaware of the fun we have had with units around our house, you may wish to check out our discussion of brownies, and (of course) the following.
2013-06-18 22:24:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3641558289527893, "perplexity": 2418.4152381453773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00066-ip-10-60-113-184.ec2.internal.warc.gz"}
http://metwiki.net/viewProgramDetail?index=538&type=view
MinimizeDFA Program Information Name: MinimizeDFA Domain: Algorithm Functionality: Transforming a given deterministic finite automaton (DFA) into an equivalent DFA that has minimum number of states based on Hopcroft’s algorithm. Input: M=(Q,E,S,q,F) Q:A finite set of states(Type: Set) E:The input alphabet(Type: Alphabet) S:The transition function $Q \times E \longrightarrow Q$(Type: Function) q:The initial state. F:The set of final states which is the subset of Q(Type: Set) Output: $G=(Q_g,E,S_g,q,F_g)$ Reference How Effectively Does Metamorphic Testing Alleviate the Oracle Problem? http://dx.doi.org/10.1109/TSE.2013.46 MR Information MR1------ Description: Property: Source input: $M_s$ Source output: $G_s$ Follow-up input: $M_f$ Follow-up output: $G_f$ Input relation: $M_f = G_s$ Output relation: $G_f = G_s$ Pattern: MR2------ Description: Property: Source input: $M_s$ Source output: $G_s$ Follow-up input: $M_f$ Follow-up output: $G_f$ Input relation: $M_f$ = Adding a new state s to $M_s$,where s is duplicate to one state in $M_s$. Output relation: $G_f = G_s$ Pattern: MR3------ Description: Property: Source input: $M_s$ Source output: $G_s$ Follow-up input: $M_f$ Follow-up output: $G_f$ Input relation: $M_f$ = Adding a new state s to $M_s$,where for a state $s_i \in Q$,$\forall$ input $e_j$,if $S(s_i,e_j)$ is not equivalent to $s_i$, $S(s,e_j) = S(s_i,e_j)$; otherwise, $S(s,e_j) = s$. Output relation: $G_f = G_s$ Pattern: MR4------ Description: Property: Source input: $M_s$ Source output: $G_s$ Follow-up input: $M_f$ Follow-up output: $G_f$ Input relation: $M_f$ = Adding a new state s to $M_s$,where for a state $s_i \in Q$,$\forall$ input $e_j$,if $S(s_i,e_j)$ is not equivalent to $s_i$, $S(s,e_j) = S(s_i,e_j)$; otherwise, $S(s,e_j) = S(s_i,e_j)$ and then set $S(s_i,e_j) = s$. Output relation: $G_f = G_s$ Pattern: MR5------ Description: Property: Source input: $M_s$ Source output: $G_s$ Follow-up input: $M_f$ Follow-up output: $G_f$ Input relation: $M_f$ = Adding a new state s to $M_s$,where for a state $s_i \in Q$, $\forall$ input $e_j$,if $S(s_i,e_j)$ is not equivalent to $s_i$, $S(s,e_j) = S(s_i,e_j)$; otherwise, $S(s,e_j)$ is equal to an equivalent state of $s_$i. Output relation: $G_f = G_s$ Pattern: MR6------ Description: Property: Source input: $M_s$ Source output: $G_s$ Follow-up input: $M_f$ Follow-up output: $G_f$ Input relation: $M_f$ = Adding a new state s and a new input e to $M_s$,where $\forall$ state $s_i$ in a whole set of equivalent states, $S(s_i,e) = s$. Output relation: $Q^f_g = Q^s_g \cup \{s\}$ $E_f = E_s$ $S^f_g = S^s_g \setminus \{transition functions related to the new input e\}$ $q_f = q_s$ $F^f_g = F^s_g$ Pattern: MR7------ Description: Property: Source input: $M_s$ Source output: $G_s$ Follow-up input: $M_f$ Follow-up output: $G_f$ Input relation: $M_f$ = Adding a new state s to $M_s$,where \forall state $s_i$ in a whole set of equivalent states, for one input $e_j$,$S(s_i,e_j) = s$. Output relation: $s \in Q^f_g$ The set of equivalent states will remain unchanged. Pattern: MR8------ Description: Property: Source input: $M_s$ Source output: $G_s$ Follow-up input: $M_f$ Follow-up output: $G_f$ Input relation: $M_f$ = Adding a new state s to $M_s$, where s does not have any incoming or outgoing transition with the existing states. 
Output relation: $Q^f_g=Q^s_g \cup \{s\}$ $E_f=E_s$ $S^f_g=S^s_g$ $q_f=q_s$ $F^f_g=F^s_g$ Pattern: MR9------ Description: Property: Source input: $M_s$ Source output: $G_s$ Follow-up input: $M_f$ Follow-up output: $G_f$ Input relation: $M_f$ = In $M_s$, for a state $s_i$ and an input $e_j$,if $S(s_i,e_j)$ is equivalent/non-equivalent to $s_i$, change the transition such that $S(s_i,e_j)$ is non-equivalent/equivalent to $s_i$. Output relation: $s_i$ will become non-equivalent to its previous equivalent states. Pattern: MR10------ Description: Property: Source input: $M_s$ Source output: $G_s$ Follow-up input: $M_f$ Follow-up output: $G_f$ Input relation: $M_f$ = In $M_s$, for a state $s_i$ that has at least two incoming transitions from other states, add a new state s, where s has the same outgoing transitions as $s_i$, but some previous incoming transitions will be transferred to s, while other incoming transitions remain unchanged. Output relation: $G_f = G_s$ Pattern: MR11------ Description: Property: Source input: $M_s$ Source output: $G_s$ Follow-up input: $M_f$ Follow-up output: $G_f$ Input relation: $M_f$ = Adding a new state s to $M_s$, where for a state $s_i \in Q$, \forall input $e_j$, if $S(s_i,e_j)$ is not equivalent to $s_i$,$S(s,e_j)$ = $S(s_i,e_j)$;otherwise, $S(s,e_j) = s_i$. Output relation: $G_f = G_s$ Pattern: MR12------ Description: Property: Source input: $M_s$ Source output: $G_s$ Follow-up input: $M_f$ Follow-up output: $G_f$ Input relation: $M_f$ = Adding a new state s to $M_s$, where for a state $s_i \in Q$, \forall input $e_j$, if $S(s_i,e_j)$ is equal to an equivalent state of $S(s_i,e_j)$;otherwise, $S(s,e_j) = s_i$. Output relation: $G_f = G_s$ Pattern: MR13------ Description: Property: Source input: $M_s$ Source output: $G_s$ Follow-up input: $M_f$ Follow-up output: $G_f$ Input relation: $M_f$ = Swapping two equivalent states $s_i$ and $s_j$ of $M_s$. Output relation: $G_f = G_s$ Pattern: MR14------ Description: Property: Source input: $M_s$ Source output: $G_s$ Follow-up input: $M_f$ Follow-up output: $G_f$ Input relation: $M_f$ = Deleting an outgoing transition of a state $s_i$ in $M_s$. Output relation: $s_i$ will become non-equivalent to its previous equivalent states. Pattern: MR15------ Description: Property: Source input: $M_s$ Source output: $G_s$ Follow-up input: $M_f$ Follow-up output: $G_f$ Input relation: $M_f$ = Changing the order of inputs in $M_s$. Output relation: $G_f$ is equal to the minimal DFA that is constructed by changing the order of inputs in $G_s$ in the same way. Pattern: MR16------ Description: Property: Source input: $M_s$ Source output: $G_s$ Follow-up input: $M_f$ Follow-up output: $G_f$ Input relation: $M_f$ = Adding a new state s and a new input e to $M_s$, where among a whole set of equivalent states, for one state $s_i$, $S(s_i,e) = s$, while for all other states $s_j$,$S(s_j,e) = S(s_j,e_{1})$. Output relation: $s_i$ will become non-equivalent to its previous equivalent states. The new state will appear in $G_f$. Pattern: Insert title here
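To make relations like MR2 concrete, here is a small Python sketch (illustrative only: it uses a toy Moore-style partition refinement rather than Hopcroft's algorithm, compares only the number of equivalence classes of the source and follow-up DFAs, and every state and input name below is made up):

```python
def num_min_states(states, alphabet, delta, finals):
    # Partition refinement: start from {accepting, non-accepting} and split
    # blocks until all states in a block go to the same blocks on every input.
    partition = [b for b in (set(finals), set(states) - set(finals)) if b]
    while True:
        new_partition = []
        for block in partition:
            buckets = {}
            for s in block:
                # Signature: index of the block each input symbol leads to.
                key = tuple(next(i for i, b in enumerate(partition)
                                 if delta[(s, a)] in b) for a in sorted(alphabet))
                buckets.setdefault(key, set()).add(s)
            new_partition.extend(buckets.values())
        if len(new_partition) == len(partition):
            return len(partition)
        partition = new_partition

# Source DFA M_s (hypothetical example).
states = {'q0', 'q1', 'q2'}
alphabet = {'a', 'b'}
finals = {'q2'}
delta = {('q0', 'a'): 'q1', ('q0', 'b'): 'q0',
         ('q1', 'a'): 'q1', ('q1', 'b'): 'q2',
         ('q2', 'a'): 'q2', ('q2', 'b'): 'q2'}

# Follow-up DFA M_f for MR2: add a state q3 that duplicates q1's behaviour.
states_f = states | {'q3'}
delta_f = dict(delta)
delta_f[('q3', 'a')] = 'q1'
delta_f[('q3', 'b')] = 'q2'

# The duplicate state is equivalent to q1, so the minimal state count agrees.
assert num_min_states(states, alphabet, delta, finals) == \
       num_min_states(states_f, alphabet, delta_f, finals)
```

The assertion holds because the duplicated state behaves exactly like q1 on every input, so it falls into the same equivalence class and the minimized machine is unchanged, which is what MR2 asserts.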
2019-08-21 12:11:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5219283103942871, "perplexity": 6740.1364168179525}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315936.22/warc/CC-MAIN-20190821110541-20190821132541-00189.warc.gz"}
https://codereview.stackexchange.com/questions/132620/calculating-quiz-score-with-weights-and-partial-credit
# Calculating quiz score, with weights and partial credit There is a quiz-game with simple rules. You have to guess one of three options. Each option has a particular weight, for example: • Option 1: 20 points • Option 2: 30 points • Option 3: 50 points If player guesses the right option, he gets all amount of points. If not, he gets a part of those points: User choice || Right Answer || Score 1 1 100% 2 1 75% 3 1 50% 1 2 75% 2 2 100% 3 2 75% 1 3 25% 2 3 50% 3 3 100% You can easilly write a method to calculate score: calcRateScore: function(fact, user) { var rateScore = 0; switch (fact) { case 1: switch (user) { case 1: rateScore = 20; break; case 2: rateScore = 20 * 0.75; break; case 3: rateScore = 20 * 0.5; break; } break; case 2: switch (user) { case 1: rateScore = 30 * 0.75; break; case 2: rateScore = 30; break; case 3: rateScore = 30 * 0.75; break; } break; case 3: switch (user) { case 1: rateScore = 50 * 0.25; break; case 2: rateScore = 50 * 0.5; break; case 3: rateScore = 50; break; } break; } return rateScore; } It works, but it looks horrible. How do I get rid of all this switch statements? • Sorry, I don't get the percent / points logic.. Why gets the user in the second row of your table 75% (75% of what ?) Jun 21 '16 at 14:10 • @webdeb 75% of the option "weight" Jun 21 '16 at 14:18 • The result depends on the answer * fractionRulesWithinTheAnswer Jun 21 '16 at 14:18 • The fact is, you have a manyToMany relationship, where the fractionRules are related to the other possible answers, but the fractions dont share the same logic across, instead they have their own logic applied to each.. isn't it overcomplicated? maybe it would make sense to rethink your model? Jun 21 '16 at 14:23 You don't seem to have any concept of objects or other reusable code in what you have written. I would take a step back and think about what real-word objects you are trying to model. At a minimum, I would think you would need three different concepts: • Answer Option: a single answer option that will be related to the question and hold logic on it fractional value as an answer. • Question: which stores a related set of answers and provides logic on calculating score for the question. • Quiz: which represents an ordered collection of questions. 
Let me start by modeling the objects noted above: // let's build a question class function Question(text, baseScore) { this.text = null; this.baseScore = null; this.setText(text); this.setBaseScore(baseScore); } // add methods to question class Question.prototype = { setText: function(text) { this.text = text; // return the question object to allow for chaining return this; }, setBaseScore: function(score) { this.baseScore = score; return this; }, return null; } return this; } return this; }, getScore: function() { if(this.selectedAnswerIndex === null || this.baseScore === null) { return null; } if(scoreModifier === null) { return null; } return this.baseScore * scoreModifier; } }; this.text = null; this.questionScoreModifier = null; this.setText(text); this.setScoreModifier(modifier); }; setText: function(text) { this.text = text; return this; }, setScoreModifier(value) { this.questionScoreModifier = value; return this; } }; // build quiz class function Quiz(title) { this.title = null; this.questions = []; this.score = null; this.setTitle(title); } // quiz class methods Quiz.prototype = { setTitle: function(text) { this.title = text; return this; }, if(question instanceof Question) { console.log('Give me a Question object!'); return null; } this.questions.push(question); return this; }, return this; }, getTotalScore: function() { var total = 0; for(i=0; i<this.questions.length; i++) { var questionScore = this.questions[i].getScore(); if (questionScore !== null) { total +== questionScore; } } this.score = total; } } You now have the basic building blocks to create your quiz. You could include this code anywhere and have a reusable quiz. Now you need to build the quiz itself. That may look like this: // build your quiz var myQuiz = new Quiz('My cool quiz'); // now let build some questions var question1 = new Question( 'This is text of question 1. What is correct answer?', 50 ); // now attach answers to the question question1 // add the question to the quiz // you could continue code like above to fill out your quiz Finally, we can start working with the quiz to set answers and get scores. // now let's set a user answer on a question // we can assume that we get both the question index // and the answer index for the question from elsewhere in javascript // for example from clicking on answer selection var questionIdx = {some value}; // we can also get total quiz score var totalQuizScore = myQuiz.getTotalScore(); // or we can get scores on individual questions var question1Score = myQuiz.questions[0].getScore(); Note that the outcome here is very reusable code that could be pretty much dropped anywhere within a larger application to implement your quiz. In a real application you may also add something like a quiz rendering class to be able to render the quiz using javascript (and to separate display of quiz from core quiz objects). It also makes it much easier to modify your code in the future. You need to change how questions are scored? - well just change the scoring logic within the answer and question classes as implemented in those class properties and methods. As long as you keep the contract (i.e method calls) with the quiz, the quiz class itself would likely not need to be modified. The original task was just to get rid of switch/case statements. Given that rules of the game are never going to be changed, I decided to leave them hardcoded. The code code I provided in the question was just a method of a class that handles my quiz-game. 
So there are other methods that handle other checks and calculations already. Keeping it simple, I decided to create an array with objects, which represent each of my options:

var answers = [
  null,
  { weight: 20, fractions: [0, 1, 0.75, 0.5] },
  { weight: 30, fractions: [0, 0.75, 1, 0.75] },
  { weight: 50, fractions: [0, 0.25, 0.5, 1] }
];

calcRateScore: function(fact, user) {
  return answers[fact].weight * answers[fact].fractions[user];
}

• Sure, it looks more accurate.. But I would pass the answers as a param to the function.. and do only the calculation in the function. Jun 21 '16 at 14:28
2021-09-24 02:47:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30596303939819336, "perplexity": 4328.2730033502285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057496.18/warc/CC-MAIN-20210924020020-20210924050020-00202.warc.gz"}
https://www.doubtnut.com/question-answer/show-that-addition-subtraction-and-multiplication-are-binary-operations-on-r-but-division-is-not-a-b-642782957
# Show that addition, subtraction and multiplication are binary operations on R, but division is not a binary operation on R. Further, show that division is a binary operation on the set $R^{*}$ of nonzero real numbers.

Updated On: 17-04-2022
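For orientation, a brief sketch of the argument (an outline only, not the site's worked solution): a binary operation on a set $S$ is a map $*: S \times S \to S$. For any $a, b \in R$ we have $a + b \in R$, $a - b \in R$ and $ab \in R$, so addition, subtraction and multiplication are binary operations on $R$. Division is not, because $a \div b$ is undefined when $b = 0$, so $\div$ is not defined on all of $R \times R$. On $R^{*} = R \setminus \{0\}$, however, $a \div b$ is defined and nonzero for every pair $a, b \in R^{*}$, so division is a binary operation on $R^{*}$.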
2022-05-27 15:50:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5935094952583313, "perplexity": 807.2833928373384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662658761.95/warc/CC-MAIN-20220527142854-20220527172854-00462.warc.gz"}
https://www.taylorfrancis.com/books/9780429503931/chapters/10.1201/9780429503931-11
chapter 5

## The Laws of Thermodynamics

Thermal equilibrium of two systems. If two isolated systems A and B are brought into contact with each other, then the complete system A + B eventually goes into a state of thermal equilibrium. In this case it is said that the systems A and B are in a state of thermal equilibrium with each other. Each of the systems A and B individually is also in a state of thermal equilibrium. This equilibrium will not be broken if we remove the contact between the systems, and then after a while restore it. Consequently, if the establishment of contact between the two systems A and B, which previously were isolated, does not lead to any changes, then we can assume that these systems are in thermal equilibrium with each other (A ~ B).
2020-05-25 06:00:46
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8626125454902649, "perplexity": 123.91160576616035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347387219.0/warc/CC-MAIN-20200525032636-20200525062636-00054.warc.gz"}
http://math.stackexchange.com/questions/323334/what-was-the-first-bit-of-mathematics-that-made-you-realize-that-math-is-beautif/333817
# What was the first bit of mathematics that made you realize that math is beautiful? (For children's book)

I'm a children's book writer and illustrator, and I want to create a book for young readers that exposes the beauty of mathematics. I recently read Paul Lockhart's essay "The Mathematician's Lament," and found that I, too, lament the uninspiring quality of my elementary math education.

I want to make a book that discredits the notion that math is merely a series of calculations, and inspires a sense of awe and genuine curiosity in young readers. However, I myself am mathematically unsophisticated.

What was the first bit of mathematics that made you realize that math is beautiful? For the purposes of this children's book, accessible answers would be appreciated.

- For me Euclid's proof of the infinitude of primes was the first thing that made me realize the beauty of mathematics. – Manjil P. Saikia Mar 7 '13 at 7:02
- Wow. Just last night I had a fierce argument with one of the bartenders of my usual watering hole who is a mechanical engineering student. He insisted that he has a better idea than me of what is mathematics. I am so going to print him a copy of Lockhart's text. Thank you for that link! – Asaf Karagila Mar 7 '13 at 7:59
- I can't remember a time when I didn't think that mathematics was beautiful and fascinating. – Brian M. Scott Mar 7 '13 at 15:06
- Although I don't know if it's what you are looking for, try looking up "vihart" on youtube--Even if it's not helpful, I guarantee you will appreciate it. – Bill K Mar 8 '13 at 2:57
- I think it's a shame that this question was voted closed... – Will Mar 10 '13 at 19:50

## 153 Answers

The first thing for me is the working of an equation. It is, to me, like a stanza of a poem that tells us many things in minimum words. No one would have ever thought of describing a geometrical figure; everyone used to draw it before math's entry into the real world. It's awesome for a mathematician to say "write me a circle, an ellipse", etc. In order to tell people that math is not only concerned with problem-solving, I have produced my own quote: "Practice is entirely hollow without understanding". - Sufyan Sheikh

This is rather recent (less than a year ago), but, since I am 14, I suppose it should still apply. I remember that I was bored in some class, and that I took out my calculator and started playing with it, writing "hello" with numbers upside down. Then I saw this button (this was a scientific calculator) that said "log," and so I pressed it. At first I received "error" for log(0) = -infinity (well, close enough), but then I tried other numbers: 1, 2, 10. Then I saw that at 10 it would blurt out 1, and at 100, 2. I then realized that what log did was find the exponent of a number from a base number (of course, I didn't know that terminology then), but it was still pretty amazing. (I also learned later on that all calculators are log base 10) Edit: is there something wrong with this answer? Why was it down voted?

I don't find it beautiful, but I still find the idea expressed by the following something of a psychological curiosity: How can it be that when some algebraists say "AND" and "OR" they mean exactly the same thing? OR means that "false or false" is false, while "false or true", "true or false" as well as "true or true" are true, or more compactly:

        F   T
    F   F   T
    T   T   T

AND means this:

        F   T
    F   F   F
    T   F   T

But, since NOT(x OR y)=(NOT x AND NOT y) and NOT(T)=F and NOT(F)=T, OR and AND, to an algebraist, mean exactly the same thing!
- Your answer implies that $\neg ( \perp \lor \top) \iff ( \neg \perp \land \neg \top) \iff ( \top \land \perp ) \iff ( \perp \lor \top)$. Your truth table for $\land$ is wrong. – Andrew Salmon Mar 23 '13 at 21:10
- @AndrewSalmon Thanks, I don't know how I did that. – Doug Spoonwood Mar 24 '13 at 2:45
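The identity used in the answer above, NOT(x OR y) = (NOT x) AND (NOT y), is one of De Morgan's laws and can be checked exhaustively; here is a small Python sketch (purely illustrative) that verifies it and tabulates OR and AND for comparison with the truth tables in the answer:

```python
from itertools import product

# Verify NOT(x OR y) == (NOT x) AND (NOT y) for all truth assignments,
# and print the OR and AND values for each pair of inputs.
for x, y in product([False, True], repeat=2):
    assert (not (x or y)) == ((not x) and (not y))
    print(f"x={x!s:5} y={y!s:5}  x OR y={(x or y)!s:5}  x AND y={(x and y)!s:5}")
```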
2015-11-27 10:18:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7498586773872375, "perplexity": 1108.4967291458895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398448506.69/warc/CC-MAIN-20151124205408-00355-ip-10-71-132-137.ec2.internal.warc.gz"}
https://www.gamedev.net/forums/topic/446869-questions-about-transparents-meshes-alpha-blending-and-z-order-solved/
# OpenGL Questions about transparents meshes (alpha blending and z order) [solved] ## Recommended Posts riruilo    218 Hi friends. I need your ideas, please. I would like to implement transparent objets in my very small engine, OpenGL of course. I know I should draw firstly all my opaque objects, and after that, my transparent objects. But! from back to front, and that is my problem. How can I implement it. My idea is to take in account just one vertex per one mesh (not every triangles), my mesh class has a GetFirstVertx method. After do a traverse I can use glGetCurrentMatrix to retrieve the currrent matrix when I am in that object, but how can I get Z? If I get that my idea is to implement a method called SetZorder and GetZorder. After get Z, I think it should be easy, just order all my meshes and render them from back to front. Can anyone help me with my implementation questions? Thank you very much. [Edited by - riruilo on May 8, 2007 10:30:08 AM] ##### Share on other sites Krohm    5031 Forget about a so accurate sort. Besides killing your CPU, it still won't give correct results. Solving blend is a per-fragment problem. If you REALLY need "pixel-perfect" transparency, you should look at order independant transparency and depth peeling. In general, there are little subsets in which this is necessary. A per-OBB test would be enough. ##### Share on other sites Hodgman    51336 First you need a method that gives you the translation from model-space to world-space (the objects position). Just sort by the distance from the camera to the objects position. i.e. v = cam.position - obj.position;d = sqrt(v.x*v.x + v.y*v.y + v.z*v.z);then sort on d ##### Share on other sites zedz    291 btw u dont need to call the sqrt for the above function (esp if youre calling it for a lot of objs) ##### Share on other sites Rompa    307 Just be careful that if you sort by bounds/object centers, then a (large) object can be visible and have its center behind the camera (such as a large window) - in the distance from camera method above this will result in incorrect ordering. You may wish to consider adding testing againt the camera near plane to see if it is behind, and allow your algorithm to sort on negative numbers too. ##### Share on other sites riruilo    218 But I have 2 questions. First all, I am Opengl beginner ( and english), maybe my questions are a bit stupid. I have one arbitrary vertex of my model, in local coordinates, that vertex is transformed by modelview matrix. What operation should I do to get the transformed vertex? And just one question more. (possibly I am wrong) Before render my scene, I use glrotate and gltranslate with opposite values to move my world, that is to move the camera, but actually I am moving my world, not my camera. So, should I use this: v = cam.position - obj.position; d = sqrt(v.x*v.x + v.y*v.y + v.z*v.z); Simply I have no camera. Thanks a lot, I appreciate a lot your help. ##### Share on other sites jyk    2094 Quote: Original post by riruiloThanks for your replies.But I have 2 questions.First all, I am Opengl beginner ( and english), maybe my questions are a bit stupid.I have one arbitrary vertex of my model, in local coordinates, that vertex is transformed by modelview matrix. What operation should I do to get the transformed vertex?And just one question more. (possibly I am wrong)Before render my scene, I use glrotate and gltranslate with opposite values to move my world, that is to move the camera, but actually I am moving my world, not my camera. 
So, should I use this:v = cam.position - obj.position;d = sqrt(v.x*v.x + v.y*v.y + v.z*v.z);Simply I have no camera.Thanks a lot, I appreciate a lot your help. I would think the easiest approach would be to simply transform the position of each object into camera/view space, and then sort by z value. All you would really need would be: depth[i] = dot(object[i].pos - camera.pos, camera.forward); You should have your camera position available (since as you state, you're calling glTranslate() with the negative of this vector). Ideally you should have the forward vector available as well (or be able to derive it), but if you're relying exclusively on glRotate*() this may not be the case. Another option would be to query the modelview matrix (which at any time represents a transformation from local space to view space), apply it to your object origins, and sort based on the resulting z values. FYI, this sort of thing is a lot easier if you use an external math library, rather than relying exclusively on OpenGL's transform functions. ##### Share on other sites riruilo    218 Thanks jyk A math library? for what? for instance.... An other question, I have a transformation matrix and one vertex, what should I do to get the transformed vertex. Thanks a lot. ##### Share on other sites jyk    2094 Quote: A math library? for what? for instance... If you're asking why a math library can be useful when working with OpenGL, the answer (in short) is that it allows you to do things that are either difficult or impossible to do solely though OpenGL transform functions. Quote: An other question, I have a transformation matrix and one vertex, what should I do to get the transformed vertex. This is a good example :) OpenGL does not provide a means of 'manually' transforming geometry, or querying the geometry data after transformation, but if you have a decent math library available, this becomes quite easy. (You can do it indirectly via OpenGL, but you'd have to query the modelview matrix at the appropriate time, and then apply the transformation 'by hand' to the vertex in question.) ##### Share on other sites riruilo    218 Thanks jyk, you are very helpful for me. Just one question, I promise this is the LAST. I have the modelview matrix using glGetMatrix(modelview) and one vertex, how can I apply that transformation BY HAND to that vertex? or which theory should I know ( I´m beginnger) By the way, I need just that operation so maybe use a library for that is not useful, I think ( maybe I am wrong ) Thanks. ##### Share on other sites jyk    2094 Quote: I have the modelview matrix using glGetMatrix(modelview) and one vertex, how can I apply that transformation BY HAND to that vertex? or which theory should I know ( I´m beginnger) To multiply a vector by a matrix (representing an affine transform) using OpenGL conventions: x' = m[0] * x + m[4] * y + m[8] * z + m[12];y' = m[1] * x + m[5] * y + m[9] * z + m[13];z' = m[2] * x + m[6] * y + m[10] * z + m[14]; To learn more about what's going on here, Google 'matrix math', 'vector math', 'matrix multiplication', 'matrix vector multiplication', and similar terms. Or, just consult a good reference on linear algebra. Quote: By the way, I need just that operation so maybe use a library for that is not useful, I think ( maybe I am wrong ) The more 3-d programming you do, the more difficult it will become to get by without a math library. However, you can probably squeeze by without one for the time being if what you're doing isn't too involved. 
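A small sketch of the approach described in this thread (Python is used purely for illustration; the object list and field names are made up). It applies the column-major transform above to each object's origin and sorts back to front on the resulting view-space z:

```python
def view_space_z(m, pos):
    # Apply an OpenGL-style column-major 4x4 matrix m (a flat sequence of
    # 16 floats, e.g. the queried modelview matrix) to the point pos and
    # return only the transformed z, i.e. the depth in view/eye space.
    x, y, z = pos
    return m[2] * x + m[6] * y + m[10] * z + m[14]

def back_to_front(transparent_objects, matrix):
    # The camera looks down -z in OpenGL eye space, so farther objects have
    # a more negative z; sorting ascending therefore draws farthest first.
    return sorted(transparent_objects,
                  key=lambda obj: view_space_z(matrix, obj["position"]))

# Hypothetical usage: positions are in whatever space `matrix` maps to eye
# space (e.g. world space if only the camera transform has been applied).
objects = [{"name": "window", "position": (0.0, 1.0, -5.0)},
           {"name": "glass",  "position": (2.0, 0.0, -12.0)}]
identity = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1]
for obj in back_to_front(objects, identity):
    print(obj["name"])  # prints "glass" (farther away) before "window"
```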
##### Share on other sites
riruilo    218
Thanks. Problem solved.
##### Share on other sites
riruilo    218
Hi friends! It works nice. Thanks a lot.
But just one thing: I only use glGetMatrix(modelview) and one vertex to calculate a transformed vertex with z' = m[2] * x + m[6] * y + m[10] * z + m[14]; I don't use any camera position, because there is no camera in OpenGL, and when I transform my "virtual" camera, its transformations are stored in the modelview matrix, so I didn't use: depth[i] = dot(object[i].pos - camera.pos, camera.forward);
A screenshot, look at lights and window panes (glass):
2017-09-23 22:07:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18704019486904144, "perplexity": 2060.235762669537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689779.81/warc/CC-MAIN-20170923213057-20170923233057-00250.warc.gz"}
https://webwork.libretexts.org/webwork2/html2xml?answersSubmitted=0&sourceFilePath=Library/Hope/Calc1/00-00-Essays/GQ_Limits_05.pg&problemSeed=1234567&courseID=anonymous&userID=anonymous&course_password=anonymous&showSummary=1&displayMode=MathJax&problemIdentifierPrefix=102&language=en&outputformat=libretexts
True or False: As $x$ increases to $100$, $f(x)=1/x$ gets closer and closer to $0$, so the limit of $f(x)$ as $x$ increases to $100$ is $0$. In the answer box below, explain your reasoning for the choice of true or false you made above. Use complete sentences and correct grammar, spelling, and punctuation. Be specific and detailed. Write as if you were explaining the answer to someone else in class.
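For reference, the two limits relevant to the statement above, written out:

$$\lim_{x \to 100} \frac{1}{x} = \frac{1}{100}, \qquad \lim_{x \to \infty} \frac{1}{x} = 0.$$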
2022-07-07 17:41:39
{"extraction_info": {"found_math": true, "script_math_tex": 8, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9006627202033997, "perplexity": 170.72391255808972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104495692.77/warc/CC-MAIN-20220707154329-20220707184329-00671.warc.gz"}
https://api-project-1022638073839.appspot.com/questions/how-do-you-integrate-int-x-3-x-2-2x-5-dx-using-partial-fractions
# How do you integrate int (x-3)/(x^2-2x-5) dx using partial fractions?

Sep 29, 2017

$\int \frac{x-3}{x^2-2x-5}\,dx = \frac{1}{2}\ln|x^2-2x-5| - \frac{1}{\sqrt{6}}\ln\left|\frac{x-1-\sqrt{6}}{x-1+\sqrt{6}}\right| + C$

#### Explanation:

We seek: $I = \int \frac{x-3}{x^2-2x-5}\,dx$

If we look at the quadratic in the denominator, we find that $x^2-2x-5=0 \implies x = 1 \pm \sqrt{6}$. Therefore we do not get "perfect" factors but rather: $(x^2-2x-5) = (x-1-\sqrt{6})(x-1+\sqrt{6})$

So, although we could decompose into partial fractions, it actually complicates the problem, with the risk of an algebraic error. Another approach is to complete the square in the denominator:

$I = \int \frac{x-3}{(x-1)^2-1-5}\,dx = \int \frac{x-3}{(x-1)^2-6}\,dx$

Substitute $u = x-1 \implies \frac{du}{dx} = 1$; then

$I = \int \frac{u-2}{u^2-6}\,du = \int \frac{u}{u^2-6} - \frac{2}{u^2-6}\,du = \int \frac{1}{2}\,\frac{2u}{u^2-6} - \frac{2}{u^2-(\sqrt{6})^2}\,du$

Both of these integrals are standard, so we can now integrate, giving:

$I = \frac{1}{2}\ln|u^2-6| - 2\cdot\frac{1}{2\sqrt{6}}\ln\left|\frac{u-\sqrt{6}}{u+\sqrt{6}}\right| + C = \frac{1}{2}\ln|u^2-6| - \frac{1}{\sqrt{6}}\ln\left|\frac{u-\sqrt{6}}{u+\sqrt{6}}\right| + C$

And, restoring the earlier substitution:

$I = \frac{1}{2}\ln|(x-1)^2-6| - \frac{1}{\sqrt{6}}\ln\left|\frac{(x-1)-\sqrt{6}}{(x-1)+\sqrt{6}}\right| + C = \frac{1}{2}\ln|x^2-2x-5| - \frac{1}{\sqrt{6}}\ln\left|\frac{x-1-\sqrt{6}}{x-1+\sqrt{6}}\right| + C$
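As a quick sanity check (added here, not part of the original Socratic answer), differentiating the result recovers the integrand:
$$\frac{d}{dx}\left[\frac{1}{2}\ln|x^2-2x-5|\right]=\frac{x-1}{x^2-2x-5},\qquad
\frac{d}{dx}\left[\frac{1}{\sqrt{6}}\ln\left|\frac{x-1-\sqrt{6}}{x-1+\sqrt{6}}\right|\right]=\frac{1}{\sqrt{6}}\cdot\frac{2\sqrt{6}}{(x-1)^2-6}=\frac{2}{x^2-2x-5},$$
so the derivative of the antiderivative is $\frac{x-1}{x^2-2x-5}-\frac{2}{x^2-2x-5}=\frac{x-3}{x^2-2x-5}$, as required.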
2021-10-21 05:11:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 14, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9751043915748596, "perplexity": 1542.6715676200643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585381.88/warc/CC-MAIN-20211021040342-20211021070342-00504.warc.gz"}
http://www.ganitcharcha.com/view-article-A-Brief-Introduction-of-Number-System.html
# A Brief Introduction of Number System

Published by Ganit Charcha | Category - Math Articles | 2014-09-19 13:17:44

We provide in this note a brief account of our number system. The idea is to introduce different types of numbers and to discuss a few relevant concepts associated with them. Before we start, let us recall what Leopold Kronecker once said: "God made the integers; all the rest is the work of man." We therefore start with the introduction of integers.

## Integers

Let us start at the beginning - with how a child is first introduced to mathematics. His or her journey starts with the Natural Numbers - $1, 2, 3, 4, \ldots$ - with which he or she learns to count. They are also called Counting Numbers, and these numbers served mankind for a long time. The difficulty was how to distinguish between $4$ and $40$. For many years, an empty space was used to indicate that there is no digit in a position, and that formed the basis of the ancient place-value system until the concept of $0$ as a number was discovered. Today's decimal-based place-value notation is attributed to the ancient Indian mathematician Aryabhata, who stated that "from place to place each is ten times the preceding". Therefore, with the introduction of the number $0$, we have a new set of numbers $0, 1, 2, 3, 4, \ldots$ along which we can count in one direction. Counting in the opposite direction was meaningless and absurd in ancient days, and in fact not known. The concept of "debt" or "loss", and the rules governing operations involving this "debt" together with the number $0$, are known from the work of the ancient Indian mathematician Brahmagupta (A.D. 628). So we come to the concept of negative numbers as anything which is less than $0$, and this is how we arrive at the number line. We therefore have a bigger set of numbers $\ldots, -3, -2, -1, 0, 1, 2, 3, \ldots$ and introduce them as integers. Subsequently, the integers on the right side of $0$ are called positive integers and those on the left side of it are negative integers. Note that the number line introduced above is also referred to as the Integer Line. The operations of addition, subtraction, multiplication and division are defined on these numbers as a process of combining a pair of them, in what we today call elementary arithmetic.

## Rational Numbers

The word fraction, borrowed from the Latin "fractio", means break, and in ancient days it was used to mean a part of a whole thing which is divided into some number of equal parts. In other words, these are unit fractions, meaning one part of two ($1/2$), one part of three ($1/3$) and so on. This concept then got extended to a numerator divided by a denominator, as we mean by fraction today, as a natural generalization of multiplying a unit fraction by a counting number. But the history of mathematics is all about asking relevant questions and seeking the answers. The above extension naturally suggests a question: what if a unit fraction is multiplied by any integer? We then get new forms of numbers like $4/3, -15/6$ etc., and we need a new name. We therefore start to call these numbers Rational Numbers; a rational number is defined as a fraction $p/q$ of two integers, where the denominator $q$ is not $0$. In this way it generalizes all the numbers we have defined so far. Since integers are generalized by rational numbers, the natural question therefore is: can the integer line be extended so that rational numbers are represented geometrically? The answer is yes, and to demonstrate this we consider two intersecting integer lines $P$ and $Q$.
Let us now consider an integer $q$ on the line $Q$. We join the point $q$ on $Q$ and the point $1$ on $P$. We then draw a line through $1$ on $Q$ parallel to the joining line, and the point of intersection of this parallel line with the line $P$ represents the unit fraction $1/q$. It is easy to see that this representation is correct, since the distance of this point on $P$ from the point $0$ on $P$ is one of $q$ equal parts of the distance between the points $0$ and $1$ on $P$. The same construction and argument hold while representing the rational number $p/q$, where $p$ and $q$ are both integers; the only difference is that we start the construction by joining $q$ on $Q$ and $p$ on $P$. The number $p/q$ is then given by the point of intersection of the parallel line and the line $P$.

## Irrational Numbers

A number $a$ multiplied by itself, i.e. $a \cdot a = a^2$, gives the square of the number, and the number $a$ is said to be the square root of $a^2$, written $a = \sqrt{a^2}$. The Pythagorean mathematician Hippasus of Metapontum landed on the problem of finding the length of the hypotenuse of an isosceles right-angled triangle whose other two sides are of unit length, which amounts to asking: what is the square root of $2$? We assume that $\sqrt{2} = p/q$, where $p$ and $q$ have no common factor, which is the most general assumption we could have made given our knowledge of numbers so far. The resulting equation $p^2 = 2q^2$ leads us to the fact that both $p$ and $q$ are even, and so have a common factor $2$. This contradicts our initial assumption, and in turn proves the existence of another kind of number which is not rational. Mathematicians began to refer to these as Irrational Numbers.

## Real Numbers

The rational and irrational numbers together are called Real Numbers, and these are the numbers whose values we can visualize in the real world. The set of integers can be thought of as points on a line starting from a point called $0$ and stretched indefinitely in either direction. The positive integers are on the right side of $0$, whereas the negative integers are on its left side. One part of two, i.e. the unit fraction $1/2$, is the point on the line which bisects the line segment joining the integers $0$ and $1$. Since $1 < 2 < 4$, we have $1 < \sqrt{2} < 2$, and therefore the irrational number $\sqrt{2}$ can also be represented as some point between the points representing the integers $1$ and $2$. In fact each and every rational and irrational number is a point on that line, and hence we start to refer to it as the Real Line. This will play a critical role in understanding different properties and mathematical structures that deal with real numbers.

## Complex Numbers

Any non-zero real number multiplied by itself always yields a positive number, so the square root introduced above is applicable to positive numbers only. The natural question is therefore: what is the square root of a negative number? It cannot be answered from our knowledge of numbers so far, and so came the definition of Imaginary Numbers. The square root of $-1$ is defined as the number $i$, assuming it exists, such that $i \cdot i = -1$. The square root of $-2$ is then defined as $\sqrt{2}\,i$, and so on. The definition of $i$ as $\sqrt{-1}$ is thus consistent and lets us define the square root of every negative number; numbers involving $i$, for example $2i$, $\sqrt{2}i$, $3i$, $2i/3$, are called imaginary numbers. In 1545, Girolamo Cardano in his book Ars Magna solved the equation $x(10 - x) = 40$ as $5 + \sqrt{-15}$ and $5 - \sqrt{-15}$.
Later, in 1637, Rene Descartes proposed the standard form for numbers involving imaginary numbers as $a + bi$, where $a$ and $b$ are both real numbers. These numbers are referred to today as complex numbers. However, neither Cardano nor Descartes liked the concept of complex numbers, and both thought it useless. The symbol $i$, though we introduced it earlier for the sake of clarity and readability, was actually introduced much later, by L. Euler in 1777. The algebraic identity $\sqrt{a}\sqrt{b} = \sqrt{ab}$ holds only when $a$ and $b$ are positive; applying it carelessly to negative $a$ and $b$ leads to inconsistencies such as $-1 = \sqrt{-1}\cdot\sqrt{-1} = \sqrt{(-1)(-1)} = 1$, and this kind of inconsistency led to the use of the symbol $i$ for $\sqrt{-1}$. In 1806, Jean-Robert Argand, in his work titled "Essay on the Geometrical Interpretation of Imaginary Quantities", showed how to visualize complex numbers as points in a plane where real numbers are plotted along the X-axis and purely imaginary numbers are plotted along the Y-axis, also known as the imaginary axis. The number $a + bi$ is the point $(a, b)$ in that plane, which is referred to today as the Argand Plane or Argand Diagram. Carl Friedrich Gauss, in 1831, made Argand's idea popular and well accepted, and also adopted Descartes' $a + bi$ notation, referring to such numbers as Complex Numbers. Complex numbers are the broadest generalization of numbers as it exists today, since they include all real numbers, all purely imaginary numbers, and any combinations of them.
2018-12-19 12:17:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8130873441696167, "perplexity": 206.07833486631165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376832259.90/warc/CC-MAIN-20181219110427-20181219132427-00442.warc.gz"}
https://ctan.org/ctan-ann/id/mailman.3152.1552327837.5014.ctan-ann@ctan.org
# CTAN update: diffcoeff Date: March 11, 2019 7:10:34 PM CET Andrew Parsloe submitted an update to the diffcoeff package. Version number: 3.1 2019-03-10 License type: lppl1.3c Summary description: Write differential coefficients easily and consistently Announcement text: Version 3.1 diffcoeff corrects a bug in the differential command \dl. It can now be used before forms like \vec{x}. This caused errors in version 3. Thanks for the upload. For the CTAN Team Manfred Lotz We are supported by the TeX user groups. Please join a users group; see http://www.tug.org/usergroups.html . ## diffcoeff – Write differential coefficients easily and consistently diffcoeff.sty allows the easy and consistent writing of ordinary, partial and other derivatives of arbitrary (algebraic or numeric) order. For mixed partial derivatives, the total order of differentiation is calculated by the package. Optional arguments allow specification of points of evaluation (ordinary derivatives), or variables held constant (partial derivatives), and the placement of the differentiand (numerator or appended). The package is built on xtemplate, allowing systematic fine-tuning of the display and generation and use of variant forms (like derivatives built from D, \Delta or \delta). A command for differentials ensures the dx used in e.g. integrals is consistent with the form used in derivatives. The package requires the 3 bundles l3kernel and l3packages. Package diffcoeff Version 3.2 2019-12-28 Copyright 2016–2019 Andrew Parsloe Maintainer Andrew Parsloe more
2020-08-08 12:16:11
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.854827880859375, "perplexity": 9506.963397696298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737645.2/warc/CC-MAIN-20200808110257-20200808140257-00101.warc.gz"}
https://mathoverflow.net/questions/221980/how-many-points-are-in-such-set-with-the-same-norm-2/221983
How many points are in such a set with the same 2-norm

Let $L=[a,b]\cap\mathbb{N}$ with $a,b\in\mathbb{N}$, let $D\in\mathbb{N}$, and let $C=L^D$. I would like to know how many points there are in $C$ with a given 2-norm $d$; i.e., I'm looking for $|A|$, where $A = \{p\in C :\ \|p\|_2=d\}$.
• What kind of answer do you want? There won't be a simple nice closed formula. But for example, you might fix $a,b,d$ and ask for an estimate for the size of $|A|$ as $D\to\infty$. Or you might fix $D$ and let $a,b,d\to\infty$ in a suitable way. Of course, there are trivial cases, for example, if $a>d/\sqrt{D}$, then $A=\emptyset$. – Joe Silverman Oct 28 '15 at 21:24
• @JoeSilverman A lower bound given by a nice formula would be the best option... I was thinking of using this for data compression, but now I think it's not possible to apply it where I want. Having a lower limit will help me prove that it is indeed stupid to keep researching this path. – Carlos Navarro Astiasarán Oct 30 '15 at 0:28

The answer is the coefficient of $t^{d^2}$ in the generating function $\left( \sum_{j=a}^b t^{j^2}\right)^D$.
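A small worked instance (added here as an illustration, not part of the original thread): take $a=1$, $b=2$, $D=2$, so $C=\{1,2\}^2$. Then
$$\left(t^{1^2}+t^{2^2}\right)^2 = t^2 + 2t^5 + t^8,$$
and, for example, the coefficient of $t^{5}$ is $2$, matching the two points $(1,2)$ and $(2,1)$ with $\|p\|_2=\sqrt{5}$ (i.e. $d^2=5$).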
2020-11-26 21:50:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9132792353630066, "perplexity": 176.71634697225022}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141188947.19/warc/CC-MAIN-20201126200910-20201126230910-00059.warc.gz"}
https://www.physicsforums.com/threads/klein-gordon-linear-potential-solution.286370/
# Klein-Gordon linear potential solution 1. Jan 21, 2009 ### pellman I have an exact solution to the Klein-Gordon equation with linear potential. But I am only an amateur physics enthusiast with no incentive (or time) to do anything with it, nor familiarity enough with the physics to know if it is interesting and, if so, interesting to whom. It has been sitting on my desk for a couple of years now. It is a quite simple solution but not given in terms of energy eigenstates. It is the relativistic analog to a very simple solution of the linear potential Schrodinger case--which itself is so simple you could write it on the palm of your hand, but which appears to be generally unknown since it also is not in terms of energy eigenstates and so (I presume) not useful. If you are interested in having it to further develop it into a paper or incorporate it into your research, let me know. If you are reading this any time after a week of posting this, send me a PM since I probably won't see replies to the discussion thread. Todd 2. Jan 21, 2009 ### Hans de Vries The solution(s) should simply be wavefunctions of accelerating particles. See for instance. "The Lorentz force from the Klein Gordon equation" http://www.physics-quest.org/Book_Lorentz_force_from_Klein_Gordon.pdf Which should become more evident if you take the charge-current density of your solution. \begin{aligned} &j^o ~~=~~~~ &\frac{i\hbar e}{2m}\left(~\psi^*\frac{\partial \psi}{\partial x^o}-\frac{\partial \psi^*}{\partial x^o}\psi ~ \right) ~~-~~ &\frac{e}{c}~\Phi~\psi^*\psi \\ &j^i ~~=~~ - &\frac{i\hbar e}{2m}\left(~\psi^*\frac{\partial \psi}{\partial x^i}-\frac{\partial \psi^*}{\partial x^i}\psi ~ \right) ~~-~~ &e\,A^i~\psi^*\psi \end{aligned} Regards, Hans 3. Jan 21, 2009 ### pellman A point of interest is that in both the Schrodinger and K-G cases the explicit solutions are NOT the same as a free particle viewed from a constantly accelerating frame. In other words, you can't make the solution for a linear potential -Fx look like the free particle solution by substituting x --> x - F/m . In the Schrodinger case though it is true that any expectation value is the same as the free particle case viewed from an accelerated frame, so the equivalence principle would still hold experimentally. I can't make the same claim for the Klein-Gordon case because .... I don't know what the probabilistic interpretation of a K-G wave is! (I think its still an open question.)
2018-05-26 16:08:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9127333164215088, "perplexity": 872.2715816102979}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867559.54/warc/CC-MAIN-20180526151207-20180526171207-00405.warc.gz"}
https://socratic.org/questions/how-do-you-solve-5x-2-x-3-using-the-quadratic-formula
# How do you solve 5x^2 + x = 3 using the quadratic formula? Mar 24, 2016 $x = \frac{- 1 \pm \sqrt{61}}{10}$ #### Explanation: Given $a {x}^{2} + b x + c = 0$ $\implies x = \frac{- b \pm \sqrt{{b}^{2} - 4 a c}}{2 a}$ $5 {x}^{2} + x = 3 \implies 5 {x}^{2} + x - 3 = 0$ $\implies a = 5$ $\implies b = 1$ $\implies c = - 3$ $x = \frac{- 1 \pm \sqrt{{1}^{2} - 4 \cdot 5 \cdot - 3}}{2 \cdot 5}$ $x = \frac{- 1 \pm \sqrt{1 + 60}}{10}$ $x = \frac{- 1 \pm \sqrt{61}}{10}$
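As a quick check of the result (added here, not part of the original answer), substituting $x=\frac{-1\pm\sqrt{61}}{10}$ back into the left-hand side gives
$$5x^2+x=\frac{31\mp\sqrt{61}}{10}+\frac{-1\pm\sqrt{61}}{10}=\frac{30}{10}=3,$$
using $x^2=\frac{62\mp 2\sqrt{61}}{100}$, so both roots satisfy $5x^2+x=3$.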
2021-01-15 23:19:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 10, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6512000560760498, "perplexity": 3333.8243270292705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703497681.4/warc/CC-MAIN-20210115224908-20210116014908-00678.warc.gz"}
https://math.stackexchange.com/questions/305606/does-there-exist-a-set-of-all-cardinals
# Does there exist a set of all cardinals? [duplicate]

Does there exist a set that contains all the cardinal numbers?
• This has been asked several times before. Once recently. Feb 16, 2013 at 16:39

Assume $C$ were the set of all cardinals. Then $\bigcup C$ would be a cardinal, and its cardinal successor $\left(\bigcup C\right)^+$ would be a cardinal exceeding every cardinal in $C$, hence not an element of $C$, which is a contradiction.
• This would also imply that there is no function that maps n to $\aleph_n$? Feb 16, 2013 at 16:43
• @PyRulez The cardinals are not limited to the $\aleph_n$'s, they go way way beyond that. Feb 16, 2013 at 16:43
• Wait, how is that from n to $\aleph_n$? $f(3) \neq \aleph_3$! Feb 16, 2013 at 16:46

The class of ordinals is well ordered by $\in$. As a corollary we get the Burali-Forti theorem, which says that there is no set of all ordinal numbers. As a corollary of that corollary we can prove that there is no set that contains all the ordinals. Proof: Let $A$ be a set that contains all the ordinals. You can prove that $\{x\in A : x \ \text{is an ordinal}\}$ is a set, which contradicts the Burali-Forti theorem.
2023-04-01 22:57:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7769325375556946, "perplexity": 227.24289478376338}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950363.89/warc/CC-MAIN-20230401221921-20230402011921-00281.warc.gz"}
https://sea-man.org/fire-hazards.html
. Site categories # Hydrocarbon Process Safety – Fire Hazards, Risks and Controls This article covers the following learning outcome: outline the fire hazards, risks and controls in Oil and Gas Industry. ## Lightning A lightning strike is a massive discharge of electricity from the atmosphere, where the electrical charge has built up, to the earth. The threat from lightning cannot be entirely eliminated, particularly with floating roof tanks where vapour is usually present around the rim seal. In these circumstances, measures to mitigate the consequences of a fire should be provided, including automatic rim seal fire extinguisher systems. Threats from a lightning strike include: • Sparks which can cause a fire or explosion; • Power surges to electrical equipment, particularly monitoring and safety devices which can render them inoperable. Protection from lightning strikes is a specialist area requiring expert knowledge as to what systems are suitable for each facility. However, in general they include the following: • A “dissipation array system” which reduces the potential between the site and any storm cloud cell that might be in the vicinity; • A grounding system called a “current collector”. This provides an electrically isolated area within which the facility will be located. This is normally made up of wire buried to a depth of about 25 centimetres and which surrounds the protected area. This wire is also connected to rods which are driven into the earth at about 10-metre intervals. Finally, the enclosed area is integrated by a net of cross-conductors which are also connected to any structures within the area, as well as the grounding system itself. This allows any current to discharge to earth safely; • Electrical surge suppression devices. These devices have two distinct functions to perform. First, to stop direct strikes within the facility, and second, to prevent fast-rising, high current surges. In general, the necessary precautions are: • To keep the lightning channelled far away from the immediate neighbourhood of flammable and explosive materials; • To avoid sparking or flashover in joints and clamps, and at nearby components; • To prevent the overheating of conductors; • To prevent flashover or sparking due to induced voltages; • To prevent raising the potential of the earth termination system; • All metal containers to be of sufficient thickness (usually 5 mm minimum); • Down-conductors fitted to all other metal structures and in sufficient numbers as to subdivide any current surge adequately; • All earthing systems to be interconnected to a single earth termination system. This usually takes the form of a mesh or grid pattern around the site. ## Fire triangle and potential consequences of explosions and thermal radiation Fire can be defined as ‘the rapid oxidization of a material or substance’. This is known as combustion, which releases light, heat and various reaction products such as smoke and gas. Fire is made up of three interdependent elements known as the fire triangle. These are: • Heat or a source of ignition; • Fuel; • Oxygen. This is known as the fire triangle. ### The fire triangle The fire triangle is a way of understanding the way in which these elements, which are necessary for most fires to burn, interrelate. The triangle shows that for burning to start (and to continue to do so) requires three elements: heat, fuel and an oxidizing agent (this is usually oxygen – but not always). 
A fire naturally occurs when these elements are brought together in the right proportions. A fire can be prevented or extinguished by removing any one of these elements. Similarly, fires can be prevented by the isolation of any one of these elements, particularly fuel and ignition. It is important to remember that it is only the vapour from a fuel that actually burns. Before combustion takes place, a solid or liquid must be heated to a point at which vapour is given off and that vapour can ignite. ### Explosions An explosion is a type of fire but one which combusts with such a rapid force that it causes an effect known as over-pressure (explosion). Under certain conditions, the speed of the front of the flame may move to a supersonic level, resulting in a significantly more powerful explosion. There are three types of explosion that are associated with the oil and gas industry. These are: 1. Boiling Liquid Expanding Vapour Explosion (BLEVE); 2. Confined Vapour Cloud Explosion (CVCE); 3. Unconfined Vapour Cloud Explosion (UVCE). We touched on these types of explosion in Hazards, Risks and Controls available for Safe Containment of Hydrocarbons this article but, because of their significance in the oil and gas industry, it’s worthwhile reminding ourselves of what these types of explosion are. ### Boiling Liquid Expanding Vapour Explosion (BLEVE) We will now look at a scenario whereby a Liquefied Petroleum Gas (LPG) storage vessel is exposed to an external source of heat – possibly a fire. The LPG is kept in a liquid state inside the vessel because it is held at a temperature below its boiling point, which means it takes up substantially less space than in its gaseous state. Anything which changes that state from a liquid to a gas, such as the external source of heat (fire), will increase the pressure inside the storage vessel to a potentially unsustainable level. At first the vapour will be vented out via the pressure relief valve on the top of the vessel. However, the rate of increase in pressure under these circumstances is likely to be unsustainable and the vessel is likely to eventually fail, with a consequential loss of containment. The resulting instantaneous release of LPG vapour will likely make contact with a source of ignition, resulting in a Boiling Liquid Expanding Vapour Explosion (BLEVE). Should a situation occur whereby a source of heat (fire or otherwise) begins to radiate itself onto an LPG storage vessel, the following action should be taken. Apart from removing or extinguishing the source of heat, the storage vessel, and any other storage vessels nearby, should be deluged with copious amounts of water to keep the metal cool. ### Confined Vapour Cloud Explosion (CVCE) A confined vapour cloud explosion is an explosion following a leak of vapour which occurs in a confined space, such as a building or a tank. ### Unconfined Vapour Cloud Explosion (UVCE) An unconfined vapour cloud explosion is an explosion following a leak of vapour which occurs in an unconfined space, outdoors. Thermal radiation is the transfer of heat from one source to another. This can be a structure or a person. Where the recipient source is a person, the consequences can be severe. The initial effect of exposure to a source of heat (fire) is to warm the skin. This then becomes painful as the amount of energy absorbed increases. Thereafter, second-degree burns begin to take effect, with the depth of burn increasing with time for a steady level of radiation. 
Ultimately, the full thickness of the skin will burn and the underlying flesh will start to be damaged, resulting in third-degree burns. When plant, including pipework and vessels, is exposed to thermal radiation the effect is the transfer of heat to the product inside the plant. This can change the characteristic of the product and make it less stable. These characteristics include the potential to make the product expand and/or increase the amount of vapour given off, amongst other things. This can result in loss of containment, with an ensuing vapour cloud explosion, jet fire, pool fire or running liquid fire. ## Electrostatic charges Whenever a liquid moves against a solid object, such as the inside of a pipe, it generates a static electrical charge. This is caused by ions (charged atoms) being transferred from the liquid to the surface of the pipe or vessel. The most common cause of static electricity build-up is where there is a flow (transfer) or movement (mixing process) of liquid within a process. The amount and rate of static generation can be dictated by a number of factors. These factors, or their elimination or reduction, can also be used to control the risks associated with static electrical generation. These include: • The conductivity of the liquid; • The amount of turbulence in the liquid; • The amount of surface area contacts between the liquid and other surfaces; • The velocity of the liquid; • The presence of impurities in the liquid; • The atmospheric conditions. Static build-up is enhanced when the air is dry. Let’s look at some typical areas within a process where static electricity is most likely to occur, as well as some simple control measures. #### Electrostatic charges – piping systems As we’ve mentioned, the flow of liquid through piping systems can generate a static charge. However, there are factors which can influence the amount of charge generated. These include the rate of flow and the velocity of the liquid. Control measures include keeping the rate and velocity of the liquid low. This can be achieved by ensuring pipe dimensions are appropriate for the volume of liquid flowing through them; and also ensuring the length of pipe is as short as possible. #### Electrostatic charges – filling operations Filling operations, which involve large flows of liquid and splashing, generate turbulence. This turbulence allows the large amounts of liquid to pass against the vessel surfaces which in turn generates a static charge. If the liquid has already passed through piping to get to the filling operation, this will only serve to increase the accumulated charge already generated. Control measures include: • Ensuring filling operations do not involve the free-fall of liquids. This will reduce the amount of splashing taking place; • Lowering the velocity of the liquid being filled; • Ensure fill pipes touch the bottom of the container being filled; • Tanks which have been filled with products that have a low conductivity, i. e. jet fuels and diesels, should be given time to relax before the process continues; • Tanks which have been filled with product should not have any ullage (vapour space) for a set period of time. Nor should any dipping of the product take place, again for a set period of time. 
#### Electrostatic charges – filtration By their very nature, filters have large surface areas, and this can generate as much as 200 times the amount of electrostatic charge in a piping system that has a filtration system within it, as compared with the same piping system without filtration. Control measures include ensuring good bonding and grounding is in place (see below). #### Electrostatic charges – other issues • Liquids which have particles within them are more susceptible to the generation of static charge than those without; • Static can be generated when liquids are mixed together; • Piping or vessels which allow a space for vapour to accumulate are a particular concern as any spark generated from a discharge of static electricity may cause an explosion inside the pipe. ### Methods of controlling static charges Although the generation of static electricity cannot be totally eliminated, the rate of generation and its accumulation can be reduced by the following control measures, what you can read below. #### Methods of controlling static charges – additives In some instances, anti-static additives can be introduced to reduce static charge build up. #### Methods of controlling static charges – bonding and grounding Bonding and grounding techniques are a very effective means of minimizing the risk of spark generation from a build-up of static electricity. A bonding system is where all the various pieces of equipment within a process system are connected together. This ensures that they all have the same electrical potential, which means there is no possibility of a discharge of electricity, by way of a spark, from one piece of equipment to another. Grounding is where pieces of equipment (which may be bonded together or not) are connected to an earthing point. This ensures any electrical charge in the equipment is given the means to constantly flow to earth, thus ensuring there is no potentially dangerous build-up of charge which could lead to a sudden discharge of electricity, by way of a spark. All equipment which is involved in processing or storing flammable liquid, gas or vapour should be bonded and grounded. Some other considerations are: • Incidental objects and equipment, such as probes, thermometers and spray nozzles, which are isolated, but which can become sufficiently charged to cause a static spark, may need special consideration. • The cables used for bonding and grounding cables should be heavy duty cables. This is to ensure that they can cope with physical wear and tear without compromising their grounding ability. It is also to ensure that their electrical resistance is as low as possible. • The bonding of process equipment to conductors must be direct and positive. • Using an inert gas, such as nitrogen, within the ullage space of a storage vessel will prevent an explosion or flash fire occurring if an electrostatic spark does occur. The inert gas lowers the oxygen content of the gas in the ullage space, thus ensuring there is insufficient oxygen to support a burning process (oxygen being part of the fire triangle). • Operators should wear anti-static clothing. ## The identification of ignition sources ### Fire hazards, risks and controls In the oil and gas industry, the severity of any incident involving fire and/or explosion is likely to be very grave, possibly involving loss of life, severe damage or destruction of plant, as well as having a potential impact on local communities. 
Consequently, any type of fire or explosion is unacceptable and controls must be put in place to prevent such an occurrence. These controls fall into two main categories. First, any product should remain contained or under control throughout the process it is undergoing. In simple terms this means that any leak of product is regarded as highly undesirable. However, if a leak does occur there should be systems in place to detect it immediately and for appropriate action to be taken to control it and/or mitigate any consequences. Second, all sources of ignition should be eradicated as far as possible in areas where product is processed and has the potential to escape. Where it is necessary to introduce an ignition source into such an area, such as maintenance involving hot work, then an appropriate risk assessment should be undertaken to identify and evaluate the risks, as well as introducing a permit-to-work regime. These measures may well be accompanied by other appropriate controls, such as temporarily shutting down the process and having fire-fighting equipment to hand.

### Identifying sources of ignition

We will now look at potential ignition sources which need to be considered when conducting a risk assessment. Some of the sources of ignition have had basic control measures added.
• Smoking and smoking material.
• A total ban on smoking and the taking of smoking materials into controlled areas should be enforced.
• Vehicles.
• Vehicles may be totally prohibited or restricted to only specially adapted vehicles.
• Hot work such as welding, grinding, burning, etc.
• Implement a permit-to-work regime.
• Electrical equipment.
• The equipment should be suitable for the zone it is intended to be used in. It should also be properly and regularly inspected and maintained.
• Machinery such as generators, compressors, etc.
• Hot surfaces such as those heated by process or by local weather (hot deserts).
• Heated process equipment such as dryers and furnaces.
• Flames such as pilot lights.
• Space heating equipment.
• Sparks from lights and switches.
• Use only electrical equipment and instrumentation classified for the zone in which it is located.
• Impact sparks.
• Stray current from electrical equipment.
• Ensure all equipment is bonded and earthed.
• Electrostatic discharge sparks.
• Bond and ground all plant and equipment.
• Make the correct selection of equipment to avoid high intensity electromagnetic radiation sources, e.g. limitation on the power input to fibre optic systems, avoidance of high intensity lasers or sources of infrared radiation.
• Lightning.
• We have covered the control measures for lightning earlier in this section. There should be measures in place which reduce the potential of a lightning strike, as well as a grounding system to disperse any charge that may affect the installation. A further consideration is to look at weather windows (i.e. to not work during electrical storms).

Other control measures include:
• Control of maintenance activities that may cause sparks or flames through a permit-to-work system.
• Precautions to control the risk from pyrophoric scale. This is where a substance can ignite spontaneously in air, particularly humid air, and is usually associated with formation of ferrous sulphide.
• Where control and/or detection equipment is regarded as critical, such as smoke and flame detectors, then a back-up or secondary system may be considered appropriate.
All of these control measures are supplementary to the main control and fire-fighting systems such as emergency shutdown systems, fire deluge systems, sprinkler systems, etc. ## Zoning/hazardous area classification and selection of suitable ignition-protected electrical and mechanical equipment and critical control equipment ### Introduction Gases and vapours can create explosive atmospheres. Consequently, areas where these potentially hazardous airborne substances present themselves are classed as hazardous areas so that appropriate controls can be implemented. However, how often these substances present themselves is also a factor in determining the appropriate level of control. For example, if the presence of a flammable vapour only happens once every three months, it would not be sensible to apply the same level of control to an area where a flammable vapour is present all day, every day. Read also: Safety Critical Equipment Controls in Oil and Gas Industry The answer is to apply a classification to areas – called zoning – which places appropriate controls on the type of equipment that can be used in that area and which potentially can create a source of ignition, particularly electrical equipment, which reflect the risk involved. This zoning is determined by the frequency and extent of explosive atmospheres being present over a fixed period of time and the likelihood of an explosive atmosphere occurring at the same time as an ignition source becomes active. All of these parameters are established through a rigorous risk assessment. ### Zoning A place where an explosive atmosphere may occur on a basis frequent enough to be regarded as requiring special precautions to reduce the risk of a fire or explosion to an acceptable level is called a “hazardous place”. A place where an explosive atmosphere is not expected to occur on a basis frequent enough to be regarded as requiring special precautions is called a “non-hazardous place”. Under these circumstances, “special precautions” means applying measures to control sources of ignition within an area designated as a hazardous place. Determining which areas are hazardous places, and to what extent, is called a “hazardous area classification study”. A hazardous area classification study is a method of analysing the extent and frequency to which an area is subject to having an explosive atmosphere. The main purpose of this is to facilitate the appropriate selection and installation of apparatus, tools and equipment which can be used safely within the environment, even if an explosive atmosphere is present. A hazardous area classification study involves giving due consideration to the following: • The flammable materials that may be present; • The physical properties and characteristics of each of the flammable materials; • The source of potential releases and how they can form explosive atmospheres; • Prevailing operating temperatures and pressures; • Presence, degree and availability of ventilation (forced and natural); • Dispersion of released vapours to below flammable limits; • The probability of each release scenario. Consideration of these factors will enable the appropriate selection of zone classification for each area regarded as hazardous, as well as the geographical extent of each zone. The results of this work should be documented in hazardous area classification data sheets. These sheets should be supported by appropriate reference drawings which will show the extent of the zones around various plant items. 
Hazardous areas are classified into zones based on an assessment of two factors:
1. The frequency of the occurrence of an explosive gas atmosphere;
2. The duration of an explosive gas atmosphere.

These two factors in combination will then facilitate the decision-making process which will determine which zone will apply to the area under consideration.
• Zone 0: An area in which an explosive gas atmosphere is present continuously or for long periods of time;
• Zone 1: An area in which an explosive gas atmosphere is likely to occur in normal operation;
• Zone 2: An area in which an explosive gas atmosphere is not likely to occur in normal operation but, if it does occur, will only exist for a short period of time.

As the zone definitions only take into account the frequency and duration of explosive atmospheres being present, and not the consequences of an explosion, it may be deemed necessary, because of the severe consequences of any explosion, to upgrade any equipment specified for use within that area to a higher level. This will be a discretionary option open to the analysis team.

### Selection of equipment

As we inferred earlier in this section, the whole idea of zoning is to determine what apparatus, tools and equipment may be installed or used in a particular zone. The issue with electrical equipment is that it normally creates sparks, either as a result of the brushes coming into contact with the rotating armature, or when a switch is activated. Either event can ignite any flammable gas present in the atmosphere in the vicinity of the equipment. Consequently, manufacturers have designed specialized equipment which overcomes, in various ways, the issue of having sparks which are exposed to the local atmosphere. The particular solution which is incorporated into each piece of equipment is signified by a code which is marked on the equipment's product identification label. For example, "d" signifies equipment which has the motor and switch enclosed in a flameproof enclosure, and "q" signifies powder-filled equipment. Both types of equipment are safe to use in Zones 1 and 2, as indicated in Table 1 below.

Table 1. Tools and equipment categorization in zoned areas

| | Zone 0 | Zone 1 | Zone 2 |
|---|---|---|---|
| Definition | An area in which an explosive gas atmosphere is present continuously or for long periods of time | An area in which an explosive gas atmosphere is likely to occur in normal operation | An area in which an explosive gas atmosphere is not likely to occur in normal operation but, if it does occur, will only exist for a short period of time |
| Equipment | Category 1 equipment (note: although categorized for use in Zone 0, it can also be used in Zones 1 and 2) | Category 2 equipment (note: although categorized for use in Zone 1, it can also be used in Zone 2) | Category 3 equipment (note: can only be used in Zone 2) |
| Protection types | "ia" – Intrinsically safe; Ex s – Special protection, if specifically certified for Zone 0 | "d" – Flameproof enclosure; "p" – Pressurized; "q" – Powder filled; "o" – Oil immersion; "e" – Increased safety; "ib" – Intrinsically safe; "m" – Encapsulated; "s" – Special protection | Electrical type "n" |

The guide to what equipment is appropriate for each zone is the ATEX equipment directive and, whilst this is not a legal requirement outside the EU, most of the electrical standards have been developed over many years and are now set at international level. Apparatus, tools and equipment are categorized in accordance with their ability to meet the standards required when used within each zone, as shown in Table 1.
As well as taking into account the sparks that electrical equipment can generate, consideration also needs to be given to the potential surface temperature of all equipment, not just electrical equipment, although most electrical equipment does generate heat as a matter of course. In order to facilitate this, temperatures have been categorized into six classes: T1-T6. The bigger the T-number, the lower the allowable surface temperature of any equipment used. The temperature class will be determined by the auto-ignition temperature of the substance involved (see Table 2).

Table 2. Temperature classification for tools and equipment in zoned areas

| Temperature classification | Maximum surface temperature | Substances can be used which will not auto-ignite at temperatures below |
|---|---|---|
| T1 | 450 °C | 450 °C |
| T2 | 300 °C | 300 °C |
| T3 | 200 °C | 200 °C |
| T4 | 135 °C | 135 °C |
| T5 | 100 °C | 100 °C |
| T6 | 85 °C | 85 °C |

The T-number of each piece of equipment will also be marked on the equipment's product identification label. If several different flammable materials may be present within a particular area, the material that gives the highest classification will dictate the overall area classification of any equipment used.

Question 1: Electrostatic charges are a problem in hydrocarbon process systems in that they can cause sparks with the potential to create an explosion. Identify THREE factors which can influence the generation of electrostatic charges.

The command word in this question is identify. This requires an answer which selects and names a subject or issue. Your answer should include THREE of the following suggested answers. Factors which can influence the generation of static electricity include the following:
• The conductivity of the liquid;
• The amount of turbulence in the liquid;
• The amount of surface area contact between the liquid and other surfaces;
• The velocity of the liquid;
• The presence of impurities in the liquid;
• The atmospheric conditions. Static build-up is enhanced when the air is dry.

Question 2: Electrostatic charges are a problem in hydrocarbon process systems in that they can cause sparks with the potential to create an explosion. Give THREE measures that can be used to reduce the generation of electrostatic charges.

The command word in this question is give. This requires an answer without explanation. Your answer should include THREE of the following suggested answers. Measures that can be used to reduce the generation of electrostatic charges include:
• Ensure that filling operations do not involve the free-fall of liquids;
• Lower the velocity of the liquid being filled;
• Ensure fill pipes touch the bottom of the container being filled;
• Tanks which have been filled with products that have a low conductivity, i.e. jet fuels and diesels, should be given time to relax before the process continues;
• Tanks which have been filled with product should not have any ullage (vapour space) for a set period of time. Nor should any dipping of the product take place, again for a set period of time.

Question 3: With regard to hazardous areas, explain why hazardous areas are categorized into different zones.

The command word in this question is explain. This requires an answer which gives a clear account of, or reasons for, a subject or issue. Your answer should expand on the following information: Gases and vapours can create explosive atmospheres. Consequently, areas where these potentially hazardous airborne substances present themselves are classed as hazardous areas so that appropriate controls can be implemented.
However, the frequency with which these substances present themselves is also a factor in determining the appropriate level of control. For example, if the presence of a flammable vapour only happens once every three months, it would not be sensible to apply the same level of control as to an area where a flammable vapour is present all day, every day. The answer is to apply a classification to areas – called zoning – which places appropriate controls on the type of equipment that can be used in that area and which potentially can create a source of ignition, particularly electrical equipment, which reflects the risk involved.

Question 4: With regard to hazardous areas, explain why equipment should be categorized for use in different zones.

The command word in this question is explain. This requires an answer which gives a clear account of, or reasons for, a subject or issue. Your answer should expand on the following information: The issue with electrical equipment is that it normally creates sparks, either as part of the brushes coming in contact within the rotating armature, or when a switch is activated. Either event can ignite any flammable gas present in the atmosphere in the vicinity of the equipment. Consequently, manufacturers have designed specialized equipment which overcomes, in various ways, the issue of having sparks which are exposed to the local atmosphere. The particular solution which is incorporated into each piece of equipment is signified by a code which is marked on the equipment's product identification label.
2022-09-30 12:06:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34327778220176697, "perplexity": 1646.887267141332}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00354.warc.gz"}
https://math.stackexchange.com/questions/986248/is-there-an-infinite-countable-sigma-algebra-on-an-uncountable-set
# Is there an infinite countable $\sigma$-algebra on an uncountable set

Let $\Omega$ be a set. If $\Omega$ is finite, then any $\sigma$-algebra on $\Omega$ is finite. If $\Omega$ is infinite and countable, a $\sigma$-algebra on $\Omega$ cannot be infinite and countable. What if $\Omega$ is not countable? Is it possible to find an uncountable $\Omega$ with a $\sigma$-algebra that is infinite and countable?
• Isn't $\{\emptyset,\Omega\}$ a finite $\sigma$-algebra for arbitrary $\Omega$? – Hagen von Eitzen Oct 22 '14 at 18:14
• @HagenvonEitzen I edited my question accordingly. I'm looking for infinite countable $\sigma$-algebras. – Gabriel Romon Oct 22 '14 at 18:16

Suppose that $\lvert \Omega\rvert\ge\aleph_0$, and $\mathscr M\subset\mathscr P(\Omega)$ is a $\sigma$-algebra. We shall show that: $$\textit{Either}\,\,\,\, \lvert\mathscr M\rvert<\aleph_0\quad \textit{or}\quad \lvert\mathscr M\rvert\ge 2^{\aleph_0}.$$ Define in $\Omega$ the following relation: $$a\sim b\qquad\text{iff}\qquad \forall E\in\mathscr M\, (\,a\in E\Longleftrightarrow b\in E\,).$$ Clearly, "$\sim$" is an equivalence relation in $\Omega$, and every $E\in\mathscr M$ is a union of equivalence classes. Also, for every two different classes $[a]$ and $[b]$, there are $E,F\in\mathscr M$, with $E\cap F=\varnothing$, such that $[a]\subset E$ and $[b]\subset F$.

Case I. If there are finitely many classes, say $n$, then each class belongs to $\mathscr M$, and clearly $\lvert \mathscr M\rvert=2^n$.

Case II. Assume there are $\aleph_0$ classes. Fix a class $[a]$, and let $\{[a_n]:n\in\mathbb N\}$ be the remaining classes. For every $n\in\mathbb N$, there exist $E_n,F_n\in\mathscr M$ such that $[a]\subset E_n$, $[a_n]\subset F_n$ and $E_n\cap F_n=\varnothing$. Clearly, $[a]=\bigcap_{n\in\mathbb N} E_n\in\mathscr M$; the same argument shows that every class belongs to $\mathscr M$, and since $\mathscr M$ is closed under countable unions, every union of classes lies in $\mathscr M$, and thus $\lvert \mathscr M\rvert=2^{\aleph_0}$.

Case III. If there are uncountably many classes, we can pick countably infinitely many of them, $[a_n]$, $n\in\mathbb N$, together with pairwise disjoint sets $E_n\in\mathscr M$ with $[a_n]\subset E_n$ (using the Axiom of Choice), and then realise that the $\sigma$-algebra generated by the $E_n$'s has the cardinality of the continuum and is a subalgebra of $\mathscr M$.
• The use of "some $E$" is very confusing here. – Asaf Karagila Oct 22 '14 at 18:37
• I have fixed it a little bit. – Yiorgos S. Smyrlis Oct 22 '14 at 18:42
• Yeah, that's clearer. – Asaf Karagila Oct 22 '14 at 18:42
• Case II, why does each class also belong to $\mathcal{M}$? I don't see it obviously. And in Case III, how to choose the disjoint sets $E_n$? – Xiang Yu Jan 3 '16 at 7:29
• @XiangYu See the updated answer. – Yiorgos S. Smyrlis Jan 3 '16 at 14:20

No. The same proof works whether $\Omega$ is countable or uncountable. If $\cal B$ is an infinite $\sigma$-algebra, then $\cal B$ has at least the cardinality of the continuum. The proof is as follows: since $\cal B$ is infinite, it has a countable subset $\{A_i\mid i\in\Bbb N\}$. If this countable set is a $\subseteq$-chain (without loss of generality, it's increasing) take $B_i=A_i\setminus\bigcup_{j<i}A_j$, and this is an infinite family of pairwise disjoint sets. Otherwise, it's not a chain, and without loss of generality doesn't contain an infinite chain either, and by a similar induction (although now it's going to be slightly dirtier, you might have to skip a few indices between $B_i$ and $B_{i+1}$) create an infinite family $B_i$ of pairwise disjoint non-empty sets. Now it's easy to show that $\cal B$ has at least the cardinality of the continuum.
If $D\subseteq\Bbb N$, take $B_D=\bigcup_{i\in D}B_i$. And it's quite easy to see that $D\neq D'$ implies that $B_D\neq B_{D'}$. $\quad \square$ A word on choice. $\tiny\textsf{(with regards to tomasz)}$ Note that this proof uses the axiom of choice. We used the fact that if $\cal B$ is infinite, then it has a countably infinite subset. In fact it is consistent that there is a $\sigma$-algebra which is infinite, but has no countably infinite subset. Of course this $\sigma$-algebra is not countable either. If we are only interested in the answer to the original question, then the axiom of choice is not used anywhere. If there is a countably infinite subset, then the proof follows (note that all the choices above, except the $A_i$'s, are done by induction on the chosen sequence, so they are in fact AC-free); and if there is no countably infinite subset, then certainly $\cal B$ is not countable! (One example of such $\sigma$-algebra that has no countably infinite subset, is the power set of an amorphous set; where amorphous means that every subset is finite or its complement is finite. Why is this a $\sigma$-algebra? From a countably infinite collection of subsets we can define an infinite co-infinite subset, in a way similar to the above induction from $A_i$ to $B_i$.) • I was just about to post about this. I mean if the OP knows that for a countably infinite set $\Omega$ there are no countably infinite sigma algebras $\Sigma$. Then let $|\Omega|\ge \aleph_1$, and $\Sigma$, a sigma algebra such that $|\Sigma| = \aleph_0$. Let $\omega$ be a countably infinite subset of $\Omega$. Then $\Sigma \restriction \omega$ is a countably infinite sigma algebra on $\omega$, a contradiction to the OP's knowledge... Unless there are some details I'm missing – Rustyn Oct 22 '14 at 18:33 • Rustyn, but how can you assure that there is a countable subset of $\Omega$ that the restriction of $\Sigma$ to it is infinite? (That's a good approach, though, but I'd go for the approach showing that there is some quotient of $\Omega$ which is countable that the $\sigma$-algebra can be pulled to define a $\sigma$-algebra on that, instead.) – Asaf Karagila Oct 22 '14 at 18:36 • Oh @Asaf, Yeah that seems like the correct approach. My bad-- I was just spewing some naive-intuitiony stuff – Rustyn Oct 22 '14 at 18:37 • You're not saying about the fact that we're assuming axiom of choice? I'm disappointed. ;-) – tomasz Oct 22 '14 at 18:58 • @tomasz: I'm a bit sick, and tired. I should be sleeping now. But I've added to the answer, regardless of my condition. – Asaf Karagila Oct 22 '14 at 19:08 Let $\mathcal B$ be an infinite $\sigma$-algebra on a set $\Omega$. Partition $\Omega$ into two disjoint nonempty sets $A_1,B_1\in\mathcal B$. At least one of $\mathcal B\cap\mathcal P(A_1)$ and $\mathcal B\cap\mathcal P(B_1)$ is infinite; otherwise, if $|\mathcal B\cap\mathcal P(A_1)|=m\lt\aleph_0$ and $|\mathcal B\cap\mathcal P(B_1)|=n\lt\aleph_0$, we would have $|\mathcal B|=mn\lt\aleph_0$. We may assume that $\mathcal B\cap\mathcal P(B_1)$ is infinite. Next, partition $B_1$ into two disjoint nonempty sets $A_2,B_2\in\mathcal B$ so that $\mathcal B\cap\mathcal P(B_2)$ is infinite. Continuing in this way, we get an infinite sequence $A_1,A_2,A_3,\dots$ of pairwise disjoint nonempty elements of $\mathcal B$. (Every infinite Boolean algebra contains such a sequence; we haven't used the $\sigma$ yet.) 
Since $\mathcal B$ is a $\sigma$-algebra, the union of each subsequence belongs to $\mathcal B$, showing that $|\mathcal B|\ge2^{\aleph_0}$. • Yes. That was my original plan, but I couldn't remember why you can keep the induction going. How silly of me. :-) – Asaf Karagila Oct 23 '14 at 1:45
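To make the finite counting concrete, here is a small Python illustration (an added sketch, not from the original thread): the algebra generated by a partition of a set into $n$ blocks consists exactly of the unions of blocks, so it has $2^n$ elements; this is the counting used in Case I and in the $B_D$ construction above. The particular partition below is made up.

```python
from itertools import chain, combinations

# Toy finite analogue of the argument above: the algebra generated by a
# partition into n blocks is the family of all unions of blocks, so it has
# exactly 2**n elements.
blocks = [frozenset({0, 1}), frozenset({2}), frozenset({3, 4, 5})]  # hypothetical partition

def unions_of_blocks(blocks):
    """Return the set algebra generated by a finite partition."""
    algebra = set()
    for r in range(len(blocks) + 1):
        for combo in combinations(blocks, r):
            algebra.add(frozenset(chain.from_iterable(combo)))
    return algebra

print(len(unions_of_blocks(blocks)))  # 2**3 = 8, matching |M| = 2**n in Case I
```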
http://inperc.com/wiki/index.php?title=Simplicial_complex
Simplicial complex

Just like cubical complexes, a simplicial complex is a special kind of cell complex. But instead of cubes/squares, it's made of triangles, tetrahedra, ..., simplices: Since they have fewer edges, simplices are indeed simpler than anything else, even cubes: You can also compare the boundary operators.

Just like cubical complexes come from digital imaging, simplicial complexes sometimes come from a related source: scanning. A laser scanner sends a beam at the objects in a sequence of pulses. Each readout produces a point in 3D space: Eventually scanning produces a "point cloud", which is later turned into a complex by adding edges, then faces, etc. How? A simplistic approach is to choose a threshold $r>0$ and then
• add an edge between any two points if they are within $r$ of each other,
• add a face spanning three points if the diameter of the triangle is less than $r$, etc.
(A short computational sketch of this thresholding construction appears at the end of this article.) More sophisticated approaches are: the Vietoris-Rips complex, the Čech complex, Delaunay triangulation, etc. Another source of point clouds is data.

Let's compare simplicial complexes to others. Cell complexes: cells are homeomorphic to points, segments, disks, balls, ..., $n$-balls ${\bf B}^n$. Cubical complexes: cells are vertices, edges, squares, cubes, ..., $n$-cubes ${\bf I}^n$ on a rectangular grid. Simplicial complexes: cells are homeomorphic to points, segments, triangles, tetrahedra, ..., $n$-simplices.

The simplest example of an $n$-simplex is the convex polytope in ${\bf R}^n$ with $n+1$ vertices at $$(0,0,0,0,...,0), (1,0,0,0,...,0), (0,1,0,0,...,0), \ldots , (0,0,0,0,...0,1,0), (0,0,0,0,...,0,1),$$ illustrated here in 3D: More precisely, an $n$-simplex is defined as the convex hull of $n+1$ points $$v_0 v_1 \ldots v_n = {\rm conv}\{v_0, v_1, \ldots, v_n \}$$ in general position, which means: $v_1 - v_0, \ldots, v_n - v_0$ are linearly independent. We need the last restriction because, for example, the convex hull of three points isn't always a triangle (e.g., ${\rm conv}\{(0,0),(1,0),(2,0)\}$).

Theorem. The $n$-simplex is homeomorphic to the $n$-ball ${\bf B}^n$.

There is, however, an additional combinatorial structure; a simplex has faces.

Example. Suppose $a$ is a $1$-simplex $$a = v_0v_1.$$ Then its faces are $v_0$ and $v_1$. They can be easily described algebraically. An arbitrary point in $a$ is a convex combination of $v_0$ and $v_1$: $$a_0v_0 + a_1v_1 {\rm \hspace{3pt} with \hspace{3pt}} a_0 + a_1 = 1,\; a_0, a_1 \ge 0.$$ What about $v_0$ and $v_1$? They are convex combinations too, but of a special kind: $$v_0 = a_0v_0 + a_1v_1 {\rm \hspace{3pt} with \hspace{3pt}} a_0 = 1, a_1 = 0,$$ $$v_1 = a_0v_0 + a_1v_1 {\rm \hspace{3pt} with \hspace{3pt}} a_0 = 0, a_1 = 1.$$

For notation we write
• $\sigma < \tau$ if $\sigma$ is a face of $\tau$.

Similar ideas apply to higher dimensions.

Example.
Suppose $\tau$ is a $2$-simplex $$\tau = v_0v_1v_2.$$ An arbitrary point in $\tau$ is a convex combination of $v_0,v_1,v_2$: $$a_0v_0 + a_1v_1 + a_2v_2 {\rm \hspace{3pt} with \hspace{3pt}} a_0 + a_1 + a_2 = 1.$$ To find all $1$-faces, set one of these coefficients equal to $0$: $$a = a_0v_0 + a_1v_1 + a_2v_2 {\rm \hspace{3pt} with \hspace{3pt}} a_0 + a_1 + a_2 = 1 {\rm \hspace{3pt} and \hspace{3pt}} a_2 = 0,$$ $$b = a_0v_0 + a_1v_1 + a_2v_2 {\rm \hspace{3pt} with \hspace{3pt}} a_0 + a_1 + a_2 = 1 {\rm \hspace{3pt} and \hspace{3pt}} a_1 = 0,$$ $$c = a_0v_0 + a_1v_1 + a_2v_2 {\rm \hspace{3pt} with \hspace{3pt}} a_0 + a_1 + a_2 = 1 {\rm \hspace{3pt} and \hspace{3pt}} a_0 = 0.$$ So, $$a,b,c < \tau.$$ To find all $0$-faces, set two of these coefficients equal to $0$: $$v_0 = 1 \cdot v_0 + 0 \cdot v_1 + 0 \cdot v_2, {\rm \hspace{3pt} etc}.$$

Definition. Given an $n$-simplex $$\tau = v_0v_1 \ldots v_n,$$ the convex hull $\sigma$ of any $k+1$ (with $k<n$) of the vertices of $\tau$ is called a $k$-face of $\tau$. Observe that by face we mean a proper face.

Now we would like to define simplicial complexes as cell complexes whose cells are homeomorphic to simplices... It wouldn't do to leave it at that, though, because the cells of any cell complex are homeomorphic to balls, which are homeomorphic to simplices. We want the faces of simplices to be nicely attached to each other. The problem with the second example here is that $\tau$ is glued to two faces of $\sigma$.

Definition. A cell complex $K$ is a simplicial complex if
• (1) each of its cells has the structure of a simplex, i.e., there is a homeomorphism to a geometric simplex;
• (2) the complex contains all faces of each simplex: $$\tau \in K, \sigma < \tau \Rightarrow \sigma \in K,$$
• (3) two simplices can only meet along a single common face: if $\tau, \sigma \in K$, then $\tau \cap \sigma$ is a face of both $\tau$ and $\sigma$ (or is empty).

Note: The last condition implies that a simplex can't be attached to itself (remember how the circle is created?).

Any cell complex can be turned into a simplicial complex. How? By subdivision, i.e., cutting the cells into smaller cells - triangles - until the above conditions are satisfied. Sidenote: the procedure is reminiscent of the subdivision of intervals in the definition of the Riemann integral via Riemann sums.

Example. The simplest cell complex representation of the circle -- one $0$-cell and one $1$-cell -- is not a simplicial complex. We add an extra vertex that cuts $a$ in two, but this still isn't a simplicial complex since $c$ and $d$ share two faces. For that we need another subdivision:

Representation of a cell complex or a topological space as a simplicial complex is called a triangulation. For a given space, its triangulation isn't unique. In particular, any further subdivision of the above simplicial complex will be a triangulation.

Example. The familiar representation of the cylinder isn't a triangulation simply because the $2$-cell is a square, not a triangle. Cutting it in half diagonally doesn't make it a triangulation, because the new $2$-cell $\alpha$ is glued to itself. Adding more edges does the job.

Exercise. Find a triangulation of the torus. Solution:

Exercise. Find a triangulation for each of the main surfaces.

Once all the cells are simplices, a triangulation can be found via the so-called barycentric subdivision: every simplex (of any dimension) gets a new vertex inside, and all possible faces are added as well. For the $3$-simplex above, one: 1. keeps the $4$ original vertices, 2. adds $6$ new vertices, one on each edge, 3.
adds $4$ new vertices, one on each face, and 4. adds $1$ new vertex inside. Then one adds many new edges and faces.

Exercise. Find a triangulation for the cube.

Simplicial complexes also come in the form of abstract simplicial complexes. In fact, every open cover of a topological space produces one via the nerve of the cover construction. One-dimensional simplicial complexes are called graphs.
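As promised above, here is a minimal Python sketch of the naive thresholding construction from a point cloud (essentially the Vietoris-Rips idea: connect points closer than $r$, then fill in triangles all of whose edges are present). The point cloud and the threshold $r$ are made up for illustration.

```python
import numpy as np
from itertools import combinations

# Build edges and triangular faces of a threshold complex from a point cloud.
rng = np.random.default_rng(0)
points = rng.random((20, 3))        # a small synthetic "scan"
r = 0.35                            # arbitrary threshold

n = len(points)
edges = [(i, j) for i, j in combinations(range(n), 2)
         if np.linalg.norm(points[i] - points[j]) < r]
edge_set = set(edges)
# a triangle is added exactly when all three of its edges were added,
# i.e. when the diameter of the triple is below r
triangles = [(i, j, k) for i, j, k in combinations(range(n), 3)
             if {(i, j), (i, k), (j, k)} <= edge_set]

print(len(edges), "edges,", len(triangles), "triangles")
```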
https://testbook.com/question-answer/the-marked-price-of-a-book-is-rs-1000-find-which--5e230e29f60d5d3b87e1be57
# The marked price of a book is Rs. 1000. Find which case is better for the shopkeeper.
Case I - a 29% discount
Case II - a 10% discount and then a 20% discount on the marked price
Case III - a 10% discount, then a 10% discount, and again a 10% discount
Case IV - a 28% discount and then a 2% discount

1. Case II 2. Case I 3. Case IV 4. Case III

Option 4 : Case III

## Detailed Solution

Given: Marked price (M.P.) = Rs. 1000

Formula used: Total discount for two successive discounts of x% and y% = (x + y - xy/100)%

S.P. = M.P. × (100 - discount %)/100

$${\rm{S.P.}} = {\rm{M.P.}} \times \frac{{\left( {100 - d_1} \right)}}{{100}} \times \frac{{\left( {100 - d_2} \right)}}{{100}} \times \frac{{\left( {100 - d_3} \right)}}{{100}},$$ where $d_1, d_2, d_3$ are three successive discounts.

Calculation:

Case I: S.P. = 1000 × (100 - 29)/100 = Rs. 710

Case II: Total discount = (10 + 20 - 10 × 20/100)% = 28%, so S.P. = 1000 × (100 - 28)/100 = Rs. 720

Case III: $${\rm{S.P.}} = 1000 \times \frac{{\left( {100 - 10} \right)}}{{100}} \times \frac{{\left( {100 - 10} \right)}}{{100}} \times \frac{{\left( {100 - 10} \right)}}{{100}} = {\rm{Rs.}}~729$$

Case IV: Total discount = (28 + 2 - 28 × 2/100)% = 29.44%, so S.P. = 1000 × (100 - 29.44)/100 = Rs. 705.6

Since Case III yields the highest selling price (Rs. 729), it is the best case for the shopkeeper.
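As a quick cross-check of the four cases, here is a short Python sketch (an added illustration, not part of the original solution):

```python
# Apply successive percentage discounts to a marked price of Rs. 1000.
mp = 1000

def apply_discounts(price, discounts):
    """Apply successive percentage discounts to a price."""
    for d in discounts:
        price *= (100 - d) / 100
    return price

cases = {
    "Case I":   [29],
    "Case II":  [10, 20],
    "Case III": [10, 10, 10],
    "Case IV":  [28, 2],
}
for name, ds in cases.items():
    print(name, apply_discounts(mp, ds))      # 710.0, 720.0, 729.0, 705.6
best = max(cases, key=lambda k: apply_discounts(mp, cases[k]))
print("Best for the shopkeeper:", best)       # Case III
```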
https://socratic.org/questions/what-is-the-integral-of-1-ln-lnx
# What is the integral of 1/ln(lnx)? Jun 29, 2015 This is an impossible integral to complete without resorting to calculators and evaluating at explicit bounds. http://www.wolframalpha.com/input/?i=integral+of+1%2F%28ln%28lnx%29%29 Let's see how far we can go, though... $\int {\left(\ln \left(\ln x\right)\right)}^{- 1} \mathrm{dx}$ Let: $u = \ln x$ $\mathrm{du} = \frac{1}{x} \mathrm{dx}$ $x \mathrm{du} = \mathrm{dx} \to {e}^{u} \mathrm{du} = \mathrm{dx}$ $= \int {\left(\ln u\right)}^{- 1} {e}^{u} \mathrm{du}$ $= \int {e}^{u} / \left(\ln u\right) \mathrm{du}$ $s t - \int t \mathrm{ds}$ Let: $s = {e}^{u}$ $\mathrm{ds} = {e}^{u} \mathrm{du}$ $\mathrm{dt} = \frac{1}{\ln} u \mathrm{du}$ t = ? Detour. We need to know the integral for $\frac{1}{\ln} x$. Let: $q = \frac{1}{\ln} u$ $\mathrm{dq} = - \ln \frac{u}{u} \mathrm{du}$ $\mathrm{dr} = \mathrm{du}$ $r = u$ $q r - \int r \mathrm{dq}$ $= \frac{u}{\ln} u + \int \ln u \mathrm{du}$ $= \frac{u}{\ln} u + u \ln | u | - u = t$ Back to $s$ and $t$: $= {e}^{u} \left(\frac{u}{\ln} u + u \ln | u | - u\right) - \int {e}^{u} \left(\frac{u}{\ln} u + u \ln u - u\right) \mathrm{du}$ $= \frac{u {e}^{u}}{\ln} u + u {e}^{u} \ln | u | - u {e}^{u} - \int \frac{u {e}^{u}}{\ln} u \mathrm{du} - \int u {e}^{u} \ln u \mathrm{du} + \int u {e}^{u} \mathrm{du}$ The only integral we can do with real functions or standard functions is $\int u {e}^{u} \mathrm{du}$. I'm running out of variables. $o p - \int p \mathrm{do}$ Let: $o = u$ $\mathrm{do} = \mathrm{du}$ $\mathrm{dp} = {e}^{u} \mathrm{du}$ $p = {e}^{u}$ $u {e}^{u} - \int {e}^{u} \mathrm{du}$ $= u {e}^{u} - {e}^{u}$ So now we get: $= \frac{u {e}^{u}}{\ln} u + u {e}^{u} \ln | u | \cancel{- u {e}^{u} + u {e}^{u}} - {e}^{u} - \int \frac{u {e}^{u}}{\ln} u \mathrm{du} - \int u {e}^{u} \ln u \mathrm{du}$ $= \frac{u {e}^{u}}{\ln} u + u {e}^{u} \ln | u | - {e}^{u} - \int \frac{u {e}^{u}}{\ln} u \mathrm{du} - \int u {e}^{u} \ln u \mathrm{du}$ $= {e}^{u} \left[\frac{u}{\ln} u + u \ln | u | - 1\right] - \int \frac{u {e}^{u}}{\ln} u \mathrm{du} - \int u {e}^{u} \ln u \mathrm{du}$ $= {e}^{\ln x} \left[\frac{\ln x}{\ln} \left(\ln x\right) + \left(\ln x\right) \ln | \ln x | - 1\right] - \int \frac{\left(\ln x\right) {e}^{\ln x}}{x \ln \left(\ln x\right)} \mathrm{dx} - \int \frac{1}{x} \left(\ln x\right) {e}^{\ln x} \ln \left(\ln x\right) \mathrm{dx}$ $= \textcolor{b l u e}{x \left[\frac{\ln x}{\ln} \left(\ln x\right) + \left(\ln x\right) \ln | \ln x | - 1\right] - \int \frac{\ln x}{\ln \left(\ln x\right)} \mathrm{dx} - \int \left(\ln x\right) \ln \left(\ln x\right) \mathrm{dx}}$
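Since the conclusion above is that no elementary antiderivative is available, one practical option is to evaluate a definite integral numerically. A minimal SciPy sketch (an added illustration, not from the original answer); the limits 10 and 20 are arbitrary, chosen so that $\ln(\ln x) > 0$ (i.e. $x > e$):

```python
import numpy as np
from scipy.integrate import quad

# Numerically evaluate the definite integral of 1/ln(ln x) on an interval
# where ln(ln x) > 0, i.e. x > e.  The limits 10 and 20 are arbitrary.
f = lambda x: 1.0 / np.log(np.log(x))
value, abserr = quad(f, 10, 20)
print(value, abserr)
```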
https://brilliant.org/discussions/thread/problem-from-imo/
# Problem from IMO

ABCD is a rhombus in which the altitude from D bisects AB, i.e. AE = EB, where E is the foot of the altitude. What are angle A and angle B, respectively (in degrees)?

5 years, 6 months ago

Sort by:

I don't think this is from the IMO (International Mathematical Olympiad). - 5 years, 6 months ago

This is from an exam conducted here in India by a private company. - 5 years, 6 months ago

Oh, I see. Indian Mathematical Olympiad? - 5 years, 6 months ago

No way, I doubt they ask such easy questions. As Vikram said, it must be a contest by some private organisation. If private organisations use names such as "IMO" and mislead the students, then it's a very bad tactic to promote themselves. - 5 years, 6 months ago

It's called the International Mathematics Olympiad, conducted by SOF. - 5 years, 5 months ago

Nope. It's a basic-level contest conducted by a company. - 5 years, 6 months ago

Hey guys, I think she is talking about the SOF IMO and not the great one you are all thinking of! - 5 years, 6 months ago

YOU'RE RIGHT! It is a problem from the workbook. - 5 years, 6 months ago

Consider the sides of the rhombus to be of length $$x$$, i.e., $$AD=x$$. So, $$AE= \frac{x}{2}$$. Let $$\angle A= \theta$$. So, $$\cos \theta = \frac{AE}{AD} =\frac{1}{2}$$ $$\implies \theta =60^o$$ $$\implies \angle A=60^o$$ and $$\angle B=180^o-60^o=120^o$$ (since consecutive angles of a rhombus are supplementary). - 5 years, 6 months ago

This question is not from the IMO! - 5 years, 6 months ago

I think it is 180 degrees. - 5 years, 6 months ago

Consider $$DA = x$$, $$AE = x/2$$, $$EB = x/2$$. Then $$DE = \sqrt{x^{2} - (x/2)^{2}} = \sqrt{3}x/2$$, and $$DB = \sqrt{DE^2 + EB^2} = x$$ (Pythagorean theorem). So $$DA = x, DB = x, AB = x$$, and $$ADB$$ is an equilateral triangle. Thus, $$A = 60°$$ and $$B = 120°$$. - 5 years, 6 months ago

THANK YOU!!! - 5 years, 6 months ago

ANGLE A = 60, ANGLE B = 120 - 5 years, 6 months ago

If this problem is from the IMO, please tell me what year and what question. - 5 years, 6 months ago

This problem is not from the main exam; it is from the workbook. - 5 years, 6 months ago
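A quick coordinate check of the accepted answer (an added sketch, not part of the original thread; the side length and placement are arbitrary):

```python
import numpy as np

# Place A at the origin, B on the x-axis, side length 1, angle A = 60 degrees,
# then verify that the foot of the altitude from D really is the midpoint of AB.
A = np.array([0.0, 0.0])
B = np.array([1.0, 0.0])
D = np.array([np.cos(np.radians(60)), np.sin(np.radians(60))])  # AD = 1, angle A = 60°
foot = np.array([D[0], 0.0])             # foot of the perpendicular from D to AB
print(np.allclose(foot, (A + B) / 2))    # True: the altitude bisects AB
print(180 - 60)                          # angle B = 120
```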
https://math.stackexchange.com/questions/3853872/the-words-diffusion-model?noredirect=1
# The words 'Diffusion Model'

In stochastics, SDEs, and PDEs, I have heard the terminology $$\textit{"Diffusion Model" or "Diffusion Equation"}$$. The heat equation is sometimes also called the diffusion equation (since it represents the diffusion of heat over some domain, or since it is built from Brownian motion, which describes the random trajectories of a Brownian particle). I assume this is just laziness, and the heat equation is just a 'very particular diffusion equation'. Can anyone explain, in general, what someone is referring to when they talk about $$\textit{'a Diffusion Equation, or the Diffusion Part of the Equation'}$$? If anyone can explain this from a mathematical or physical perspective, that would be amazing. Note: see "Relationship between the diffusion equation and the heat equation" for a closely related discussion.

• It's mostly just a naming convention. You get a construction kit for interesting equations in $u_t=cu_x+du_{xx}+f(u)$, where $cu_x$ is a transport term after the transport equation $u_t=cu_x$, $du_{xx}$ is the diffusion term, for its connection to Brownian motion and the heat equation, as you said, and $f(u)$ is a local reaction term for birth-death dynamics, chemical reaction equations, etc. Oct 6 '20 at 14:15
• Historic names, related to their first application... Oct 6 '20 at 14:42
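To make the commenter's "diffusion term" concrete, here is a minimal numerical sketch (an added illustration, not from the thread) of the pure diffusion piece $u_t = d\,u_{xx}$, discretized with forward Euler in time and central differences in space. The grid, coefficient, and initial data are made up.

```python
import numpy as np

# Explicit finite-difference step for u_t = d * u_xx (periodic boundary).
nx, d, dx, dt = 200, 1.0, 0.05, 0.001       # dt < dx**2 / (2*d) for stability
x = np.arange(nx) * dx
u = np.exp(-((x - x.mean()) ** 2) / 0.1)    # an initial bump of "heat"

for _ in range(500):
    u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * d * u_xx                   # forward Euler in time

print(u.max())   # the bump spreads out and its peak decays, i.e. it diffuses
```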
https://greprepclub.com/forum/phil-collects-virtual-gold-in-an-online-computer-game-and-th-9709.html
# Phil collects virtual gold in an online computer game

Phil collects virtual gold in an online computer game and then sells the virtual gold for real dollars. After playing 10 hours a day for 6 days, he collected 540,000 gold pieces. If he immediately sold this virtual gold at a rate of $1 per 1,000 gold pieces, what were his average earnings per hour, in real dollars?

(A) $5
(B) $6
(C) $7
(D) $8
(E) $9

Re: Phil collects virtual gold in an online computer game [#permalink] 23 Jun 2018, 10:52

Here, total hours spent = 10 hours × 6 days = 60 hours. Total gold collected in those 60 hours = $$54 \times 10^4$$ pieces. Since 1,000 gold pieces = $1, $$54 \times 10^4$$ gold pieces = $$54 \times 10^4 \times \frac{1}{1000}$$ = $540. Therefore in 60 hours he earns $540, so in 1 hour he earns $$\frac{540}{60}$$ = $9.

Re: Phil collects virtual gold in an online computer game [#permalink] 24 Jun 2018, 17:02 Expert's post

He was playing for 10 × 6 = 60 hours, during which he earned 540,000/1,000 = $540. Thus, his hourly earnings were 540/60 = $9.
Re: Phil collects virtual gold in an online computer game [#permalink] 25 Jun 2018, 07:30

E ... $9

Re: Phil collects virtual gold in an online computer game [#permalink] 10 Jul 2018, 05:37 Expert's post

Explanation: To solve for average earnings, fill in this formula: Total earnings ÷ Total hours = Average earnings per hour. Since the gold-dollar exchange rate is $1 per 1,000 gold pieces, Phil's real-dollar earnings for the 6 days were 540,000 ÷ 1,000 = $540. His total time worked was 10 hours per day × 6 days = 60 hours. Therefore, his average hourly earnings were $540 ÷ 60 hours = $9 per hour.
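A one-line check of the arithmetic (an added sketch):

```python
# Average hourly earnings in real dollars.
gold, hours = 540_000, 10 * 6
dollars = gold / 1_000          # $1 per 1,000 gold pieces
print(dollars / hours)          # 9.0 -> answer (E)
```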
https://www.gamedev.net/forums/topic/20242-pushpopmatrix/
#### Archived

This topic is now archived and is closed to further replies.

# push/popmatrix()

## Recommended Posts

Hi... I've got 2 questions:

1) How does glPushMatrix() know which matrix it's supposed to push onto the stack? The following two lines? So this would push glRotatef() and glTranslatef() onto the stack, right?

glPushMatrix();
glRotatef(/*whatever*/);    // <-------\
glTranslatef(/*whatever*/); // <-------|
glPopMatrix();              // takes off these --/

2) Is it possible to put glRotatef() and glTranslatef() between glBegin() and glEnd()? thx. cya, Phil

1 - glPushMatrix pushes the matrix you are currently operating on onto a stack, I assume in this case the modelview matrix. glPopMatrix destroys the current matrix by loading the last matrix pushed onto the stack. You would have to place a polygon for the rotate and translate to operate on in between the push and pop. In your pseudocode nothing would happen.

2 - No.

ok, thx, I knew that it wouldn't do anything without a polygon in between them, I just wanted to see if that would use the glTranslatef() and glRotatef() as matrix data... ok, it does... thx.

2) Hmmm. Then, if I only draw one vertex between glBegin(GL_TRIANGLE) and glEnd(), then apply my rotations and transformations, then glBegin(GL_TRIANGLE), draw one vertex, glEnd(), apply transformations and rotations again, and then draw the next vertex... would this draw my triangle, or would I just draw nothing because it loses the data from the first glBegin(GL_TRIANGLE)? I need this because I'm trying to animate a model with bone animations, and I want the joints, which cannot be saved to display lists (since they change every frame), saved in their own display list for each frame... Then I can draw the model for each frame by drawing the display list for the joints for that frame and applying the meshes in the display lists by first rotating and translating to the right position... Sounds kinda complicated, but if it works it would save memory and calculation time... cya, Phil

What you should do is create some type of hierarchical structure to store your model in. Each node would hold a display list built from the segment (such as forearm, upper arm, shin, thigh, etc.) and the rotation and position of the joint (relative to the parent joint). So you still get to gain speed by using display lists, but can also rotate joints and stuff. When you draw, just start with the top node (probably the head if you are doing a humanoid), then apply a glRotatef() for the rotation of the neck joint, then draw the torso, then do a glRotatef() before drawing the upper arm, etc. Am I explaining this well enough? If you need more explanation, I could write up some pseudocode. Morgan

This is a list of valid commands that can appear between glBegin and glEnd in OpenGL 1.2: glVertex, glColor, glIndex, glNormal, glTexCoord, glMultiTexCoord, glEdgeFlag, glMaterial, glArrayElement, glEvalCoord, glEvalPoint, glCallList, glCallLists. I think OpenGL wants you to define a shape in totality before you begin transforming it. I get all sorts of errors if I screw up a glBegin/glEnd pair; I usually forget the end part. I think you have to multiply the transformation matrix by the location of the points of your joints and then use the transformed coordinates to draw your triangles or quads. I have never done this, so don't quote me.
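A minimal Python (PyOpenGL) sketch of the pattern the replies describe, not from the original thread: transforms are applied to the current matrix between glPushMatrix/glPopMatrix, and only vertex-style calls appear between glBegin/glEnd. It assumes a GL context and the legacy fixed-function pipeline are already set up.

```python
from OpenGL.GL import (GL_MODELVIEW, GL_TRIANGLES, glBegin, glEnd,
                       glMatrixMode, glPopMatrix, glPushMatrix,
                       glRotatef, glTranslatef, glVertex3f)

def draw_rotated_triangle(angle_deg):
    """Draw one triangle rotated about the z-axis, leaving the matrix stack untouched."""
    glMatrixMode(GL_MODELVIEW)
    glPushMatrix()                      # save the current modelview matrix
    glTranslatef(0.0, 0.0, -5.0)        # transforms go *outside* glBegin/glEnd
    glRotatef(angle_deg, 0.0, 0.0, 1.0)
    glBegin(GL_TRIANGLES)               # only vertex-style calls are legal in here
    glVertex3f(-1.0, -1.0, 0.0)
    glVertex3f( 1.0, -1.0, 0.0)
    glVertex3f( 0.0,  1.0, 0.0)
    glEnd()
    glPopMatrix()                       # restore the matrix saved above
```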
ok, first, thanks for your replies again...

Morgan> I've got a hierarchical structure. (I get it from the MilkShape3D modeler; I wrote a plugin for my own file format.) The problem with that is that not all of a face's vertices have to belong to the same bone (the connection faces between the meshes). So what I was planning to do is to make display lists from the faces that belong to one bone and rotate those, and with the rest I'd build a display list for each frame... But since I can't rotate between glBegin() and glEnd() (thx to GKW for the info), I can't draw those faces correctly... So what I'm thinking of now is just to rotate and translate all of the remaining vertices to the right position without glRotate() and glTranslate() (yes, by "hand" ;-) and then save the results in display lists... Actually, does it really pay off to use display lists for such small parts? A friend (and the lead programmer of my programming group) says it doesn't... I'm kinda in between... What's your opinion? thx, Phil

I doubt that you would get a huge speed increase with small display lists, but if it is used a lot, who knows. I don't think you should be using display lists for what you are doing. Making a new display list every frame sounds like a problem in the making. I bet the best way to go is the vertex array. You will be able to alter a vertex when needed, and then you can draw every frame via glArrayElement. The red book suggests using display lists for objects that do not change shape and need to be displayed in different places, like wheels on a car: make the wheel once, then call the display list for each rotation and translation for each wheel. If you are using C, using glArrayElement is a good way to go. I am using Java, so the lack of pointers is a problem in this case, but once I get my VRML parser working I will get to find out for myself, as using the array is the method I plan to use. Like I said, though, this is my first time doing bone animation, so if someone else out there knows a better way I would also like to hear about it.

2 votes for the non-display-list version... ;-) Also wanted to say, by "frame" I meant each frame of the animation, sorry about that... cya around, Phil

Correct me if I'm wrong, but somewhere I heard that Q3 creates a display list for each frame in its character animations. I would prefer a skeleton system, but maybe this is something you would consider; it might solve your problem. Morgan

If you are going to use a display list for each frame of animation, then display lists are the way to go. I thought you were going for a dynamic model, which is what I will be doing soon. You might just end up fiddling with the locations of your vertices when you initialize the model. You should be able to figure out a system for moving the vertices and then compiling a display list for each frame of animation. Sorry about the confusion. For what you are doing I would use display lists, so change my vote. If you know the location of each vertex in relation to the joint it is attached to, then you should be able to easily figure out the location of each vertex after you rotate and translate the joints of your skeleton.
ok, to resume: I'll make one display list for each frame of the animation, right? This means it takes more RAM, but I wouldn't have to rotate, translate, and loadIdentity() as much at runtime... ok, thx a lot for all of your help... cya around, Phil

All you will have to do is orient the model in your world with translates and rotates and then call the display list of the proper frame of animation. Good luck.

One thing to note: once you have built the display list, you can get rid of your vertex info (saves RAM that way). Morgan
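A minimal PyOpenGL sketch of the per-frame display-list approach the thread converges on (my own illustration, assuming a GL context and the legacy fixed-function pipeline; the data layout is made up): vertices are transformed on the CPU once, baked into a list, and replayed whenever that frame is shown.

```python
from OpenGL.GL import (GL_COMPILE, GL_TRIANGLES, glBegin, glCallList, glEnd,
                       glEndList, glGenLists, glNewList, glVertex3fv)

def compile_frame_list(transformed_vertices):
    """Bake one animation frame (vertices already transformed "by hand" on the
    CPU) into a display list and return its id."""
    list_id = glGenLists(1)
    glNewList(list_id, GL_COMPILE)
    glBegin(GL_TRIANGLES)
    for v in transformed_vertices:   # expects 3-tuples, three per triangle
        glVertex3fv(v)
    glEnd()
    glEndList()
    return list_id

# later, once per rendered frame of the animation:
# glCallList(frame_lists[current_frame])
```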
https://backus.home.blog/
# A PDE-analytic proof of the fundamental theorem of algebra The fundamental theorem of algebra is one of the most important theorems in mathematics, being core to algebraic geometry and complex analysis. Unraveling the definitions, it says: Fundamental theorem of algebra. Let $f$ be a polynomial over $\mathbf C$ of degree $d$. Then the equation $f(z) = 0$ has $d$ solutions $z$, counting multiplicity. Famously, most proofs of the fundamental theorem of algebra are complex-analytic in nature. Indeed, complex analysis is the natural arena for such a theorem to be proven. One has to use the fact that $\mathbf R$ is a real closed field, but since there are lots of real closed fields, one usually defines $\mathbf R$ in a fundamentally analytic way and then proves the intermediate value theorem, which shows that $\mathbf R$ is a real closed field. One can then proceed by tricky algebraic arguments (using, e.g. Galois or Sylow theory), or appeal to a high-powered theorem of complex analysis. Since the fundamental theorem is really a theorem about algebraic geometry, and complex analysis sits somewhere between algebraic geometry and PDE analysis in the landscape of mathematics (and we need some kind of analysis to get the job done; purely algebro-geometric methods will not be able to distinguish $\mathbf R$ from another field $K$ such that $-1$ does not have a square root in $K$) it makes a lot of sense to use complex analysis. But, since complex analysis sits between algebraic geometry and PDE analysis, why not abandon all pretense of respectability (that is to say, algebra — analysis is not a field worthy of the respect of a refined mathematician) and give a PDE-analytic proof? Of course, this proof will end up “looking like” multiple complex-analytic proofs, and indeed it is basically the proof by Liouville’s theorem dressed up in a trenchcoat (and in fact, gives Liouville’s theorem, and probably some other complex-analytic results, as a byproduct). In a certain sense — effectiveness — this proof is strictly inferior to the proof by the argument principle, and in another certain sense — respectability — this proof is strictly inferior to algebraic proofs. However, it does have the advantage of being easy to teach to people working in very applied fields, since it entirely only uses the machinery of PDE analysis, rather than fancy results such as Liouville’s theorem or the Galois correspondence. The proof By induction, it suffices to prove that if $f$ is a polynomial with no zeroes, then $f$ is constant. So suppose that $f$ has no zeroes, and introduce $g(z) = 1/f(z)$. As usual, we want to show that $g$ is constant. Since $f$ is a polynomial, it does not decay at infinity, so $g(\infty)$ is finite. Therefore $g$ can instead be viewed as a function on the sphere, $g: S^2 \to \mathbf C$, by stereographic projection. Also by stereographic projection, one can cover the sphere by two copies of $\mathbf R^2$, one centered at the south pole that misses only the north pole, and one centered at the north pole that only misses the south pole. Thus one can define the Laplacian, $\Delta = \partial_x^2 + \partial_y^2$, in each of these coordinates; it remains well-defined on the overlaps of the charts, so $\Delta$ is well-defined on all of $S^2$. 
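As a quick sanity check of the harmonicity claim made below (an added illustration, not part of the original post), one can verify numerically that the real part of $g = 1/f$ has vanishing Laplacian away from the zeroes of $f$. The sample polynomial $f(z) = z^2 + 1$ and the sample point are arbitrary.

```python
import numpy as np

def g_real(x, y):
    """Real part of g = 1/f for the sample polynomial f(z) = z^2 + 1."""
    z = complex(x, y)
    return (1.0 / (z * z + 1.0)).real

h = 1e-3
x0, y0 = 0.3, 0.7                     # an arbitrary point away from the poles at ±i
lap = (g_real(x0 + h, y0) + g_real(x0 - h, y0)
       + g_real(x0, y0 + h) + g_real(x0, y0 - h)
       - 4.0 * g_real(x0, y0)) / h**2
print(lap)   # ≈ 0, up to discretization and rounding error
```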
(In fancy terminology, which may help people who already know ten different proofs of the fundamental theorem of algebra but will not enlighten anyone else, we view $S^2$ as a Riemannian manifold under the pushforward metric obtained by stereographic projection, and consider the Laplace-Beltrami operator of $S^2$.) Recall that a function $u$ is called harmonic provided that $\Delta u = 0$. We claim that $g$ is harmonic. The easiest way to see this is to factor $\Delta = 4\partial\overline \partial$ where $2\partial = \partial_x - i\partial_y$. Then $\overline \partial u = 0$ exactly if $u$ has a complex derivative, by the Cauchy-Riemann equations. There are other ways to see this, too, such as using the mean-value property of harmonic functions and computing the antiderivative of $g$. In any case, the proof is just calculus. So $g$ is a harmonic function on the compact connected manifold $S^2$; by the extreme value theorem, $g$ has (or more precisely, its real and imaginary parts have) a maximum. By the maximum principle of harmonic functions (which is really just the second derivative test — being harmonic generalizes the notion of having zero second derivative), it follows that $g$ is equal to its maximum, so is constant. (In fancy terminology, we view $g$ as the canonical representative of the zeroth de Rham cohomology class of $S^2$ using the Hodge theorem.) # Let’s Read: Sendov’s conjecture in high degree, part 4: details of case one In this proof we (finally!) finish the proof of case one. As usual, we throughout fix a nonstandard natural ${n}$ and a complex polynomial of degree ${n}$ whose zeroes are all in ${\overline{D(0, 1)}}$. We assume that ${a}$ is a zero of ${f}$ whose standard part is ${1}$, and assume that ${f}$ has no critical points in ${\overline{D(a, 1)}}$. Let ${\lambda}$ be a random zero of ${f}$ and ${\zeta}$ a random critical point. Under these circumstances, ${\lambda^{(\infty)}}$ is uniformly distributed on ${\partial D(0, 1)}$ and ${\zeta^{(\infty)}}$ is almost surely zero. In particular, $\displaystyle \mathbf E \log\frac{1}{|\lambda|}, \mathbf E \log |\zeta - a| = O(n^{-1})$ and ${\zeta}$ is infinitesimal in probability, hence infinitesimal in distribution. Let ${\mu}$ be the expected value of ${\zeta}$ (thus also of ${\lambda}$) and ${\sigma^2}$ its variance. I think we won’t need the nonstandard-exponential bound ${\varepsilon_0^n}$ this time, as its purpose was fulfilled last time. Last time we reduced the proof of case one to a sequence of lemmata. We now prove them. 1. Preliminary bounds Lemma 1 Let ${K \subseteq \mathbf C}$ be a compact set. Then $\displaystyle f(z) - f(0), ~f'(z) = O((|z| + o(1))^n)$ uniformly for ${z \in K}$. Proof: It suffices to prove this for a compact exhaustion, and thus it suffices to assume $\displaystyle K = \overline{D(0, R)}.$ By underspill, it suffices to show that for every standard ${\varepsilon > 0}$ we have $\displaystyle |f(z) - f(0)|, ~|f'(z)| \leq C(|z| + \varepsilon)^n.$ We first give the proof for ${f'}$. First suppose that ${\varepsilon < |z| \leq R}$. Since ${\zeta}$ is infinitesimal in distribution, $\displaystyle \mathbf E \log |z - \zeta| \leq \mathbf E \log \max(|z - \zeta|, \varepsilon/2) \leq \log \max(|z|, \varepsilon/2) + o(1);$ here we need the ${\varepsilon/2}$ and the ${R}$ since ${\log |z - \zeta|}$ is not a bounded continuous function of ${\zeta}$. 
Since ${\varepsilon < |z|}$ we have $\displaystyle \mathbf E \log |z - \zeta| \leq \log |z| + o(1)$ but we know that $\displaystyle -\frac{\log n}{n - 1} - \frac{1}{n - 1} \log |f'(z)| = U_\zeta(z) = -\mathbf E \log |z - \zeta|$ so, solving for ${\log |f'(z)|}$, we get $\displaystyle \log |f'(z)| \leq (n - 1) \log |z| + o(n);$ we absorbed a ${\log n}$ into the ${o(n)}$. That gives $\displaystyle |f'(z)| \leq e^{o(n)} |z|^{n-1}.$ Since ${f'}$ is a polynomial of degee ${n - 1}$ and ${f}$ is monic (so the top coefficient of ${f'}$ is ${n}$) this gives a bound $\displaystyle |f'(z)| \leq e^{o(n)} (|z| + \varepsilon)^{n - 1}$ even for ${|z| \leq \varepsilon}$. Now for ${f}$, we use the bound $\displaystyle |f(z) - f(0)| \leq \max_{|w| < |z|} |f'(w)|$ to transfer the above argument. $\Box$ 2. Uniform convergence of ${\zeta}$ Lemma 2 There is a standard compact set ${S \subseteq \overline{D(0, 1)}}$ and a standard countable set ${T \subseteq \overline{D(0, 1)} \setminus \overline{D(1, 1)}}$ such that $\displaystyle S = (\overline{D(0, 1)} \cap \partial D(1, 1)) \cup T,$ all elements of ${T}$ are isolated in ${S}$, and ${||\zeta - S||_{L^\infty}}$ is infinitesimal. Tao claims $\displaystyle \mathbf P(|\zeta - a| \geq \frac{1}{2m}) = O(n^{-1})$ where ${m}$ is a large standard natural, which makes no sense since the left-hand side should be large (and in particular, have positive standard part). I think this is just a typo though. Proof: Since ${\zeta}$ was assumed far from ${a = 1 - o(1)}$ we have $\displaystyle \zeta \in \overline{D(0, 1)} \setminus D(1, 1 - o(1)).$ We also have $\displaystyle \mathbf E \log |\zeta - a| = O(n^{-1})$ so for every standard natural ${m}$ there is a standard natural ${k_m}$ such that $\displaystyle \mathbf P(\log |\zeta - a| \geq \frac{1}{2m}) \leq \frac{k_m}{n}.$ Multiplying both sides by ${n}$ we see that $\displaystyle \text{card } Z \cap K_m = \text{card } Z \cap \{\zeta_0 \in \overline{D(0, 1)}: \log |\zeta_0 - a| \geq \frac{1}{2m}\} \leq k_m$ where ${Z}$ is the variety of critical points ${f' = 0}$. Let ${T_m}$ be the set of standard parts of zeroes in ${K_m}$; then ${T_m}$ has cardinality ${\leq k_m}$ and so is finite. For every zero ${\zeta_0 \in Z}$, either 1. For every ${m}$, $\displaystyle |\zeta_0 - a| < \exp\left(\frac{1}{2m}\right)$ so the standard part of ${|\zeta_0 - a|}$ is ${1}$, or 2. There is an ${m}$ such that ${d(\zeta_0, T_m)}$ is infinitesimal. So we may set ${T = \bigcup_m T_m}$; then ${T}$ is standard and countable, and does not converge to a point in ${\partial D(1, 1)}$, so ${S}$ is standard and ${||\zeta - S||_{L^\infty}}$ is infinitesimal. I was a little stumped on why ${S}$ is compact; Tao doesn’t prove this. It turns out it’s obvious, I was just too clueless to see it. The construction of ${T}$ forces that for any ${\varepsilon > 0}$, there are only finitely many ${z \in T}$ with ${|z - \partial D(1, 1)| \geq \varepsilon}$, so if ${T}$ clusters anywhere, then it can only cluster on ${\partial D(1, 1)}$. This gives the desired compactness. $\Box$ The above proof is basically just the proof of Ascoli’s compactness theorem adopted to this setting and rephrased to replace the diagonal argument (or 👏 KEEP 👏 PASSING 👏 TO 👏 SUBSEQUENCES 👏) with the choice of a nonstandard natural. I think the point is that, once we have chosen a nontrivial ultrafilter on ${\mathbf N}$, a nonstandard function is the same thing as sequence of functions, and the ultrafilter tells us which subsequences of reals to pass to. 3. 
Approximating ${f,f'}$ outside of ${S}$ We break up the approximation lemma into multiple parts. Let ${K}$ be a standard compact set which does not meet ${S}$. Given a curve ${\gamma}$ we denote its arc length by ${|\gamma|}$; we always assume that an arc length does exist. A point which stumped me for a humiliatingly long time is the following: Lemma 3 Let ${z, w \in K}$. Then there is a curve ${\gamma}$ from ${z}$ to ${w}$ which misses ${S}$ and satisfies the uniform estimate $\displaystyle |z - w| \sim |\gamma|.$ Proof: We use the decomposition of ${S}$ into the arc $\displaystyle S_0 = \partial D(1, 1) \cap \overline{D(0, 1)}$ and the discrete set ${T}$. We try to set ${\gamma}$ to be the line segment ${[z, w]}$ but there are two things that could go wrong. If ${[z, w]}$ hits a point of ${T}$ we can just perturb it slightly by an error which is negligible compared to ${[z, w]}$. Otherwise we might hit a point of ${S_0}$ in which case we need to go the long way around. However, ${S_0}$ and ${K}$ are compact, so we have a uniform bound $\displaystyle \max(\frac{1}{|z - S_0|}, \frac{1}{|w - S_0|}) = O(1).$ Therefore we can instead consider a curve ${\gamma}$ which goes all the way around ${S_0}$, leaving ${D(0, 1)}$. This curve has length ${O(1)}$ for ${z, w}$ close to ${S_0}$ (and if ${z, w}$ are far from ${S_0}$ we can just perturb a line segment without generating too much error). Using our uniform max bound above we see that this choice of ${\gamma}$ is valid. $\Box$ Recall that the moments ${\mu,\sigma}$ of ${\zeta}$ are infinitesimal. Since ${||\zeta - S||_{L^\infty}}$ is infinitesimal, and ${K}$ is a positive distance from any infinitesimals (since it is standard compact), we have $\displaystyle |z - \zeta|, |z - \mu| \sim 1$ uniformly in ${z}$. Therefore ${f}$ has no critical points near ${K}$ and so ${f''/f'}$ is holomorphic on ${K}$. We first need a version of the fundamental theorem. Lemma 4 Let ${\gamma}$ be a contour in ${K}$ of length ${|\gamma|}$. Then $\displaystyle f'(\gamma(1)) = f'(\gamma(0)) \left(\frac{\gamma(1) - \mu}{\gamma(0) - \mu}\right)^{n - 1} e^{O(n) |\gamma| \sigma^2}.$ Proof: Our bounds on ${|z - \zeta|}$ imply that we can take the Taylor expansion $\displaystyle \frac{1}{z - \zeta} = \frac{1}{z - \mu} + \frac{\zeta - \mu}{(z - \mu)^2} + O(|\zeta - \mu|^2)$ of ${\zeta}$ in terms of ${\mu}$, which is uniform in ${\zeta}$. Taking expectations preserves the constant term (since it doesn’t depend on ${\zeta}$), kills the linear term, and replaces the quadratic term with a ${\sigma^2}$, thus $\displaystyle s_\zeta(z) = \frac{1}{z - \mu} + O(\sigma^2).$ At the start of this series we showed $\displaystyle f'(\gamma(1)) = f'(\gamma(0)) \exp\left((n-1)\int_\gamma s_\zeta(z) ~dz\right).$ Plugging in the Taylor expansion of ${s_\zeta}$ we get $\displaystyle f'(\gamma(1)) = f'(\gamma(0)) \exp\left((n-1)\int_\gamma \frac{dz}{z - \zeta}\right) e^{O(n) |\gamma| \sigma^2}.$ Simplifying the integral we get $\displaystyle \exp\left((n-1)\int_\gamma \frac{dz}{z - \zeta}\right) = \left(\frac{\gamma(1) - \mu}{\gamma(0) - \mu}\right)^{n - 1}$ whence the claim. 
$\Box$ Lemma 5 Uniformly for ${z,w \in K}$ one has $\displaystyle f'(w) = (1 + O(n|z - w|\sigma^2 e^{o(n|z - w|)})) \frac{(w - \mu)^{n-1}}{(z - \mu)^{n - 1}}f'(z).$ Proof: Applying the previous two lemmata we get $\displaystyle f'(w) = e^{O(n|z - w|\sigma^2)} \frac{(w - \mu)^{n-1}}{(z - \mu)^{n - 1}}f'(z).$ It remains to simplify $\displaystyle e^{O(n|z - w|\sigma^2)} = 1 + O(n|z - w|\sigma^2 e^{o(n|z - w|)}).$ Taylor expanding ${\exp}$ and using the self-similarity of the Taylor expansion we get $\displaystyle e^z = 1 + O(|z| e^{|z|})$ which gives that bound. $\Box$ Lemma 6 Let ${\varepsilon > 0}$. Then $\displaystyle f(z) = f(0) + \frac{1 + O(\sigma^2)}{n} f'(z) (z - \mu) + O((\varepsilon + o(1))^n).$ uniformly in ${z \in K}$. Proof: We may assume that ${\varepsilon}$ is small enough depending on ${K}$, since the constant in the big-${O}$ notation can depend on ${K}$ as well, and ${\varepsilon}$ only appears next to implied constants. Now given ${z}$ we can find ${\gamma}$ from ${z}$ to ${\partial B(0, \varepsilon)}$ which is always moving at a speed which is uniformly bounded from below and always moving in a direction towards the origin. Indeed, we can take ${\gamma}$ to be a line segment which has been perturbed to miss the discrete set ${T}$, and possibly arced to miss ${S_0}$ (say if ${z}$ is far from ${D(0, 1)}$). By compactness of ${K}$ we can choose the bounds on ${\gamma}$ to be not just uniform in time but also in space (i.e. in ${K}$), and besides that ${\gamma}$ is a curve through a compact set ${K'}$ which misses ${S}$. Indeed, one can take ${K'}$ to be a closed ball containing ${K}$, and then cut out small holes in ${K'}$ around ${T}$ and ${S_0}$, whose radii are bounded below since ${K}$ is compact. Since the moments of ${\zeta}$ are infinitesimal one has $\displaystyle \int_\gamma (w - \mu)^{n-1} ~dw = \frac{(z - \mu)^n}{n} - \frac{\varepsilon^n e^{in\theta}}{n} = \frac{(z - \mu)^n}{n} - O((\varepsilon + o(1))^n).$ Here we used ${\varepsilon < 1}$ to enforce $\displaystyle \varepsilon^n/n = O(\varepsilon^n).$ By the previous lemma, $\displaystyle f'(w) = (1 + O(n|z - w|\sigma^2 e^{o(n|z - w|)})) \frac{(w - \mu)^{n-1}}{(z - \mu)^{n - 1}}f'(z).$ Integrating this result along ${\gamma}$ we get $\displaystyle f(\gamma(0)) = f(\gamma(1)) - \frac{f'(\gamma(0))}{(\gamma(0) - \mu)^{n-1}} \left(\int_\gamma (w - \mu)^{n-1} ~dw + O\left(n\sigma^2 \int_\gamma|\gamma(0) - w| e^{o(n|\gamma(0) - w|)}|w - \mu|^{n-1}~dw \right) \right).$ Applying our preliminary bound, the previous paragraph, and the fact that ${|\gamma(1)| = \varepsilon}$, thus $\displaystyle f(\gamma(1)) = f(0) + O((\varepsilon + o(1))^n),$ we get $\displaystyle f(z) = f(0) + O((\varepsilon + o(1))^n) - \frac{f'(z)}{(z - \mu)^{n-1}} \left(\frac{(z - \mu)^n}{n} - O((\varepsilon + o(1))^n) + O\left(n\sigma^2 \int_\gamma|z - w| e^{o(n|z - w|)}|w - \mu|^{n-1}~dw \right)\right).$ We treat the first term first: $\displaystyle \frac{f'(z)}{(z - \mu)^{n-1}} \frac{(z - \mu)^n}{n} = \frac{1}{n} f'(z) (z - \mu).$ For the second term, ${z \in K}$ while ${\mu^{(\infty)} \in K}$, so ${|z - \mu|}$ is bounded from below, whence $\displaystyle \frac{f'(z)}{(z - \mu)^{n-1}} O((\varepsilon + o(1))^n) = O((\varepsilon + o(1))^n).$ Thus we simplify $\displaystyle f(z) = f(0) + O((\varepsilon + o(1))^n) + \frac{1}{n} f'(z) (z - \mu) + \frac{f'(z)}{(z - \mu)^{n-1}} O\left(n\sigma^2 \int_\gamma|z - w| e^{o(n|z - w|)}|w - \mu|^{n-1}~dw \right).$ It will be convenient to instead write this as $\displaystyle f(z) = f(0) + O((\varepsilon + 
o(1))^n) + \frac{1}{n} f'(z) (z - \mu) + O\left(n|f'(z)|\sigma^2 \int_\gamma|z - w| e^{o(n|z - w|)} \left|\frac{w - \mu}{z - \mu}\right|^{n-1}~dw \right).$ Now we deal with the pesky integral. Since ${\gamma}$ is moving towards ${\partial B(0, \varepsilon)}$ at a speed which is bounded from below uniformly in “spacetime” (that is, ${K \times [0, 1]}$), there is a standard ${c > 0}$ such that if ${w = \gamma(t)}$ then $\displaystyle |w - \mu| \leq |z - \mu| - ct$ since ${\gamma}$ is going towards ${\mu}$. (Tao’s argument puzzles me a bit here because he claims that the real inner product ${\langle z - w, z\rangle}$ is uniformly bounded from below in spacetime, which seems impossible if ${w = z}$. I agree with its conclusion though.) Exponentiating both sides we get $\displaystyle \left|\frac{w - \mu}{z - \mu}\right|^{n-1} = O(e^{-nct})$ which bounds $\displaystyle f(z) = f(0) + O((\varepsilon + o(1))^n) + \frac{1}{n} f'(z) (z - \mu) + O\left(n|f'(z)|\sigma^2 \int_0^1 te^{-(c-o(1))nt} ~dt\right).$ Since ${c}$ is standard, it dominates the infinitesimal ${o(1)}$, so after shrinking ${c}$ a little we get a new bound $\displaystyle f(z) = f(0) + O((\varepsilon + o(1))^n) + \frac{1}{n} f'(z) (z - \mu) + O\left(n|f'(z)|\sigma^2 \int_0^1 te^{-cnt} ~dt\right).$ Since $\displaystyle \int_0^1 te^{-cnt} ~dt \leq \int_0^\infty te^{-cnt} ~dt = (cn)^{-2}$ and ${c}$ is standard, the quantity ${n\int_0^1 te^{-cnt} ~dt}$ is ${O(n^{-1})}$. Plugging in everything we get the claim. $\Box$ 4. Control on zeroes away from ${S}$ After the gargantuan previous section, we can now show the “approximate level set” property that we discussed last time. Lemma 7 Let ${K}$ be a standard compact set which misses ${S}$ and ${\varepsilon > 0}$ standard. Then for every zero ${\lambda_0 \in K}$ of ${f}$, $\displaystyle U_\zeta(\lambda_0) = \frac{1}{n} \log \frac{1}{|f(0)|} + O(n^{-1}\sigma^2 + (\varepsilon + o(1))^n).$ Last time we showed that this implies $\displaystyle U_\zeta(\lambda_0) = U_\zeta(a) + O(n^{-1}\sigma^2 + (\varepsilon + o(1))^n).$ Thus all the zeroes of ${f}$ either live in ${S}$ or a neighborhood of a level set of ${U_\zeta}$. Proof: Plugging in ${z = \lambda_0}$ in the approximation $\displaystyle f(z) = f(0) + \frac{1 + O(\sigma^2)}{n} f'(z) (z - \mu) + O((\varepsilon + o(1))^n)$ we get $\displaystyle f(0) + \frac{1 + O(\sigma^2)}{n} f'(\lambda_0) (\lambda_0 - \mu) = O((\varepsilon + o(1))^n).$ Several posts ago, we proved ${|f(0)| \sim 1}$ as a consequence of Grace’s theorem, so ${f(0)O((\varepsilon + o(1))^n) = O((\varepsilon + o(1))^n)}$. In particular, if we solve for ${f'(\lambda_0)}$ we get $\displaystyle \frac{|f'(\lambda_0)|}{n} |\lambda_0 - \mu| = |f(0)| \left(1 + O(\sigma^2 + (\varepsilon + o(1))^n)\right).$ Using $\displaystyle U_\zeta(z) = -\frac{\log n}{n - 1} - \frac{1}{n - 1} \log |f'(z)|,$ plugging in ${z = \lambda_0}$, and taking logarithms, we get $\displaystyle -\frac{n - 1}{n} U_\zeta(\lambda_0) + \frac{1}{n} \log | \lambda_0 - \mu| = \frac{1}{n} \log |f(0)| + O(n^{-1}\sigma^2 + (\varepsilon + o(1))^n).$ Now ${\lambda_0 \in K}$ and ${K}$ misses the standard compact set ${S}$, so since ${0 \in S}$ we have $\displaystyle |\lambda_0 - \zeta|, |\lambda_0 - \mu| \sim 1$ (since ${\zeta^{(\infty)} \in S}$ and ${\mu}$ is infinitesimal).
So we can Taylor expand in ${\zeta}$ about ${\mu}$: $\displaystyle \log |\lambda_0 - \zeta| = \log |\lambda_0 - \mu| - \text{Re }\frac{\zeta - \mu}{\lambda_0 - \mu} + O(\sigma^2).$ Taking expectations and using ${\mathbf E \zeta - \mu}$, $\displaystyle -U_\zeta(\lambda_0) = \log |\lambda_0 - \mu| + O(\sigma^2).$ Plugging in ${\log |\lambda_0 - \mu|}$ we see the claim. $\Box$ I’m not sure who originally came up with the idea to reason like this; I think Tao credits M. J. Miller. Whoever it was had an interesting idea, I think: ${f = 0}$ is a level set of ${f}$, but one that a priori doesn’t tell us much about ${f'}$. We have just replaced it with a level set of ${U_\zeta}$, a function that is explicitly closely related to ${f'}$, but at the price of an error term. 5. Fine control We finish this series. If you want, you can let ${\varepsilon > 0}$ be a standard real. I think, however, that it will be easier to think of ${\varepsilon}$ as “infinitesimal, but not as infinitesimal as the term of the form o(1)”. In other words, ${1/n}$ is smaller than any positive element of the ordered field ${\mathbf R(\varepsilon)}$; briefly, ${1/n}$ is infinitesimal with respect to ${\mathbf R(\varepsilon)}$. We still reserve ${o(1)}$ to mean an infinitesimal with respect to ${\mathbf R(\varepsilon)}$. Now ${\varepsilon^n = o(1)}$ by underspill, since this is already true if ${\varepsilon}$ is standard and ${0 < \varepsilon < 1}$. Underspill can also be used to transfer facts at scale ${\varepsilon}$ to scale ${1/n}$. I think you can formalize this notion of “iterated infinitesimals” by taking an iterated ultrapower of ${\mathbf R}$ in the theory of ordered rings. Let us first bound ${\log |a|}$. Recall that ${|a| \leq 1}$ so ${\log |a| \leq 0}$ but in fact we can get a sharper bound. Since ${T}$ is discrete we can get ${e^{-i\theta}}$ arbitrarily close to whatever we want, say ${-1}$ or ${i}$. This will give us bounds on ${1 - a}$ when we take the Taylor expansion $\displaystyle \log|a| = -(1 - a)(1 + o(1)).$ Lemma 8 Let ${e^{i\theta} \in \partial D(0, 1) \setminus S}$ be standard. Then $\displaystyle \log |a| \leq \text{Re } ((1 - e^{-i\theta} + o(1))\mu) - O(|\mu|^2 + \sigma^2 + (\varepsilon + o(1))^n).$ Proof: Let ${K}$ be a standard compact set which misses ${S}$ and ${\lambda_0 \in K}$ a zero of ${f}$. Since ${\zeta \notin K}$ (since ${S}$ is close to ${\zeta}$) and ${|a-\zeta|}$ has positive standard part (since ${d(a, S) = 1}$) we can take Taylor expansions $\displaystyle -\log |\lambda_0 - \zeta| = -\log |\lambda_0| + \text{Re } \frac{\zeta}{\lambda_0} + O(|\zeta|^2)$ and $\displaystyle -\log |a - \zeta| = -\log|a| + \text{Re } \frac{\zeta}{a} + O(|\zeta|^2)$ in ${\zeta}$ about ${0}$. Taking expectations we have $\displaystyle U_\zeta(\lambda_0) = -\log |\lambda_0| + \text{Re } \frac{\mu}{\lambda_0} + O(\mathbf E |\zeta|^2)$ and similarly for ${a}$. 
Thus $\displaystyle -\log |a| + \text{Re } \frac{\mu}{a} = -\log |\lambda_0| + \text{Re } \frac{\mu}{\lambda_0} + O(\mathbf E |\zeta|^2 + n^{-1}\sigma^2 + (\varepsilon + o(1))^n)$ since $\displaystyle U_\zeta(\lambda_0) - U_\zeta(a) = O(n^{-1}\sigma^2 + (\varepsilon + o(1))^n).$ Since $\displaystyle \mathbf E|\zeta|^2 = |\mu|^2 + \sigma^2$ we have $\displaystyle -\log|\lambda_0| + \text{Re } \left(\frac{1}{\lambda_0} - \frac{1}{a}\right)\mu = -\log|a| + O(|\mu|^2 + \sigma^2 + (\varepsilon + o(1))^n).$ Now ${|\lambda_0| \leq 1}$ so ${-\log |\lambda_0| \geq 0}$, whence $\displaystyle \text{Re } \left(\frac{1}{\lambda_0} - \frac{1}{a}\right)\mu \geq -\log|a| + O(|\mu|^2 + \sigma^2 + (\varepsilon + o(1))^n).$ Now recall that ${\lambda^{(\infty)}}$ is uniformly distributed on ${\partial D(0, 1)}$, so we can choose ${\lambda_0}$ so that $\displaystyle |\lambda_0 - e^{i\theta}| = o(1).$ Thus $\displaystyle \frac{1}{\lambda_0} - \frac{1}{a} = 1 - e^{-i\theta} + o(1)$ which we can plug in to get the claim. $\Box$ Now we prove the first part of the fine control lemma. Lemma 9 One has $\displaystyle \mu, 1 - a = O(\sigma^2 + (\varepsilon + o(1))^n).$ Proof: Let ${\theta_+ \in [0.98\pi, 0.99\pi],\theta_- \in [1.01\pi, 1.02\pi]}$ be standard reals such that ${e^{i\theta_\pm} \notin S}$. I don’t think the constants here actually matter; we just need ${0 < 0.01 < 0.02 < \pi/8}$ or something. Anyways, summing up two copies of the inequality from the previous lemma with ${\theta = \theta_\pm}$ we have $\displaystyle 1.9 \text{Re } \mu \geq \text{Re } ((1 + e^{-i\theta_+} + 1 + e^{-i\theta_-} + o(1))\mu) \geq \log |a| + O(|\mu|^2 + \sigma^2 + (\varepsilon + o(1))^n)$ since $\displaystyle 2 + e^{-i\theta_+} + e^{-i\theta_-} + o(1) \leq 1.9.$ That is, $\displaystyle \text{Re } \mu \geq \frac{\log|a|}{1.9} + O(|\mu|^2 + \sigma^2 + (\varepsilon + o(1))^n).$ Indeed, $\displaystyle -\log |a| = (1 - a)(1 + o(1)),$ so $\displaystyle \text{Re }\mu \geq -\frac{1 - a}{1.9 + o(1)} + O(|\mu|^2 + \sigma^2 + (\varepsilon + o(1))^n).$ If we square the tautology ${|\zeta - a| \geq 1}$ then we get $\displaystyle |\zeta|^2 - 2a \text{Re }\zeta + a^2 \geq 1.$ Taking expected values we get $\displaystyle |\mu|^2 + \sigma^2 - 2a \text{Re }\mu + a^2 \geq 1$ or in other words $\displaystyle \text{Re }\mu \leq -\frac{1 - a^2}{2a} + O(|\mu|^2 + \sigma^2) = -(1 - a)(1 + o(1)) + O(|\mu|^2 + \sigma^2)$ where we used the Taylor expansion $\displaystyle \frac{1 - a^2}{2a} = (1 - a)(1 + o(1))$ obtained by Taylor expanding ${1/a}$ about ${1}$ and applying ${1 - a = o(1)}$. Using $\displaystyle \text{Re }\mu \geq -\frac{1 - a}{1.9 + o(1)} + O(|\mu|^2 + \sigma^2 + (\varepsilon + o(1))^n)$ we get $\displaystyle -\frac{1 - a}{1.9 + o(1)} + O(|\mu|^2 + \sigma^2 + (\varepsilon + o(1))^n) \leq \text{Re }\mu \leq -(1 - a)(1 + o(1)) + O(|\mu|^2 + \sigma^2)$ Thus $\displaystyle (1 - a)\left(1 + \frac{1}{1.9 + o(1)} + o(1)\right) = O(|\mu|^2 + \sigma^2 + (\varepsilon + o(1))^n).$ Dividing both sides by ${1 + \frac{1}{1.9 + o(1)} + o(1) \in [1, 2]}$ we have $\displaystyle 1 - a = O(|\mu|^2 + \sigma^2 + (\varepsilon + o(1))^n).$ In particular $\displaystyle \text{Re }\mu = O(|\mu|^2 + \sigma^2 + (\varepsilon + o(1))^n)(1 + o(1)) + O(|\mu|^2 + \sigma^2) = O(|\mu|^2 + \sigma^2 + (\varepsilon + o(1))^n).$ Now we treat the imaginary part of ${\text{Im } \mu}$. 
The previous lemma gave $\displaystyle \text{Re } ((1 - e^{-i\theta} + o(1))\mu) - \log |a| = O(|\mu|^2 + \sigma^2 + (\varepsilon + o(1))^n).$ Writing everything in terms of real and imaginary parts we can expand out $\displaystyle \text{Re } ((1 - e^{-i\theta} + o(1))\mu) = (1 - \cos \theta + o(1))\text{Re }\mu - (\sin \theta + o(1))\text{Im }\mu.$ Using the bounds $\displaystyle (1 - \cos \theta + o(1))\text{Re }\mu, ~\log |a| = O(|\mu|^2 + \sigma^2 + (\varepsilon + o(1))^n)$ (which follow from the previous paragraph and the bound ${\log |a| = O(1 - a)}$), we have $\displaystyle (\sin \theta + o(1))\text{Im } \mu = O(|\mu|^2 + \sigma^2 + (\varepsilon + o(1))^n).$ Since ${T}$ is discrete we can find ${\theta}$ arbitrarily close to ${\pm \pi/2}$ which meets the hypotheses of the above equation. Therefore $\displaystyle \text{Im } \mu = O(|\mu|^2 + \sigma^2 + (\varepsilon + o(1))^n).$ Plugging everything in, we get $\displaystyle 1 - a \sim \mu = O(|\mu|^2 + \sigma^2 + (\varepsilon + o(1))^n).$ Now ${|\mu|^2 = o(|\mu|)}$ since ${\mu}$ is infinitesimal; therefore we can discard that term. $\Box$ Now we are ready to prove the second part. The point is that we are ready to dispose of the semi-infinitesimal ${\varepsilon}$. Doing so puts a lower bound on ${U_\zeta(a)}$. Lemma 10 Let ${I \subseteq \partial D(0, 1) \setminus S}$ be a standard compact set. Then for every ${e^{i\theta} \in I}$, $\displaystyle U_\zeta(a) - U_\zeta(e^{i\theta}) \geq -o(\sigma^2) - o(1)^n.$ Proof: Since ${\lambda^{(\infty)}}$ is uniformly distributed on ${\partial D(0, 1)}$, there is a zero ${\lambda_0}$ of ${f}$ with ${|\lambda_0 - e^{i\theta}| = o(1)}$. Since ${|\lambda_0| \leq 1}$, we can find an infinitesimal ${\eta}$ such that $\displaystyle \lambda_0 = e^{i\theta}(1 - \eta)$ and ${|1 - \eta| \leq 1}$. In the previous section we proved $\displaystyle U_\zeta(a) - U_\zeta(\lambda_0) = O(n^{-1}\sigma^2 + (\varepsilon + o(1))^n).$ Using ${n^{-1} = o(1)}$ and plugging in ${\lambda_0}$ we have $\displaystyle U_\zeta(a) - U_\zeta(e^{i\theta}(1 - \eta)) = o(\sigma^2) + O((\varepsilon + o(1))^n).$ Now $\displaystyle \text{Re } \eta \int_0^1 \frac{dt}{1 - t\eta - e^{-i\theta}\zeta} = \log |1 - e^{-i\theta}\zeta| - \log|1 - \eta - e^{-i\theta}\zeta| = \log|e^{i\theta} - \zeta| - \log|e^{i\theta} - e^{i\theta}\eta - \zeta|.$ Taking expectations, $\displaystyle \text{Re }\eta \mathbf E\int_0^1 \frac{dt}{1 - t\eta - e^{-i\theta}\zeta} = U_\zeta(e^{i\theta}(1 - \eta)) - U_\zeta(e^{i\theta}).$ Taking a Taylor expansion, $\displaystyle \frac{1}{1 - t\eta - e^{-i\theta}\zeta} = \frac{1}{1 - t\eta} + \frac{e^{-i\theta}\zeta}{(1 - t\eta)^2} + O(|\zeta|^2)$ so by Fubini’s theorem $\displaystyle \mathbf E\int_0^1 \frac{dt}{1 - t\eta - e^{-i\theta}\zeta} = \int_0^1 \left(\frac{1}{1 - t\eta} + \frac{e^{-i\theta}}{(1 - t\eta)^2}\mu + O(|\mu|^2 + \sigma^2)\right)~dt;$ using the previous lemma and ${\eta = o(1)}$ we get $\displaystyle U_\zeta(e^{i\theta}(1 - \eta)) - U_\zeta(e^{i\theta}) = \text{Re }\eta \int_0^1 \frac{dt}{1 - t\eta} + o(\sigma^2) + O((\varepsilon + o(1))^n).$ We also have $\displaystyle \text{Re } \eta \int_0^1 \frac{dt}{1 - t\eta} = \log \frac{1}{|1 - \eta|} = U_0(1 - \eta)$ since ${0}$ is deterministic (and ${U_0(e^{i\theta} z) = U_0(z)}$, and ${U_0(1) = 0}$; very easy to check!) I think Tao makes a typo here, referring to ${U_i(e^{i\theta}(1 - \eta))}$, which seems irrelevant. We do have $\displaystyle U_0(1 - \eta) = -\log|1 - \eta| \geq 0$ since ${|1 - \eta| \leq 1}$.
Plugging in $\displaystyle \text{Re } \eta \int_0^1 \frac{dt}{1 - t\eta} \geq 0$ we get $\displaystyle U_\zeta(e^{i\theta} - e^{i\theta}\eta) - U_\zeta(e^{i\theta}) \geq -o(\sigma^2) - O((\varepsilon + o(1))^n).$ I think Tao makes another typo, dropping the Big O, but anyways, $\displaystyle U_\zeta(a) - U_\zeta(e^{i\theta} - e^{i\theta}\eta) = o(\sigma^2) - O((\varepsilon + o(1))^n)$ so by the triangle inequality $\displaystyle U_\zeta(a) - U_\zeta(e^{i\theta}) \geq -o(\sigma^2) - O((\varepsilon + o(1))^n).$ By underspill, then, we can take ${\varepsilon \rightarrow 0}$. $\Box$ We need a result from complex analysis called Jensen’s formula which I hadn’t heard of before. Theorem 11 (Jensen’s formula) Let ${g: D(0, 1) \rightarrow \mathbf C}$ be a holomorphic function with zeroes ${a_1, \dots, a_n \in D(0, 1)}$ and ${g(0) \neq 0}$. Then $\displaystyle \log |g(0)| = \sum_{j=1}^n \log |a_j| + \frac{1}{2\pi} \int_0^{2\pi} \log |g(e^{i\theta})| ~d\theta.$ In hindsight this is kinda trivial but I never realized it. In fact ${\log |g|}$ is subharmonic and in fact its Laplacian is exactly a linear combination of delta functions at each of the zeroes of ${g}$. If you subtract those away then this is just the mean-value property $\displaystyle \log |g(0)| = \frac{1}{2\pi} \int_0^{2\pi} \log |g(e^{i\theta})| ~d\theta.$ Let us finally prove the final part. In what follows, implied constants are allowed to depend on ${\varphi}$ but not on ${\delta}$. Lemma 12 For any standard ${\varphi \in C^\infty(\partial D(0, 1))}$, $\displaystyle \int_0^{2\pi} \varphi(e^{i\theta}) U_\zeta(e^{i\theta}) ~d\theta = o(\sigma^2) + o(1)^n.$ Besides, $\displaystyle U_\zeta(a) = o(\sigma^2) + o(1)^n.$ Proof: Let ${m}$ be the Haar measure on ${\partial D(0, 1)}$. We first prove this when ${\varphi \geq 0}$. 
Since ${T}$ is discrete and ${\partial D(0, 1)}$ is compact, for any standard (or semi-infinitesimal) ${\delta > 0}$, there is a standard compact set $\displaystyle I \subseteq \partial D(0, 1) \setminus S$ such that $\displaystyle m(\partial D(0, 1) \setminus I) < \delta.$ By the previous lemma, if ${e^{i\theta} \in I}$ then $\displaystyle \varphi(e^{i\theta}) U_\zeta(a) - \varphi(e^{i\theta}) U_\zeta(e^{i\theta}) \geq -o(\sigma^2) - o(1)^n$ and the same holds when we average in Haar measure: $\displaystyle U_\zeta(a)\int_I \varphi~dm - \int_I \varphi(e^{i\theta}) U_\zeta(e^{i\theta})~dm(e^{i\theta}) \geq -o(\sigma^2) - o(1)^n.$ The function ${\log |e^{i\theta} - \zeta| + \text{Re } e^{-i\theta}\zeta}$ lies in ${L^2(dm(e^{i\theta}))}$ with norm ${O(1)}$ uniformly in ${|\zeta| \leq 1}$ (the logarithm has at worst a logarithmic singularity), so, using the Cauchy-Schwarz inequality, one has $\displaystyle \int_{\partial D(0, 1) \setminus I} \varphi(e^{i\theta}) (\log |e^{i\theta} - \zeta| + \text{Re } e^{-i\theta}\zeta) ~dm(e^{i\theta}) = O\left(m(\partial D(0, 1) \setminus I)^{1/2}\right) = O(\delta^{1/2}).$ Meanwhile, if ${|\zeta| \leq 1/2}$ then the fact that $\displaystyle \log |e^{i\theta} - \zeta| = -\text{Re }\frac{\zeta}{e^{i\theta}} + O(|\zeta|^2)$ implies $\displaystyle \log |e^{i\theta} - \zeta| + \text{Re } \frac{\zeta}{e^{i\theta}} = O(|\zeta|^2)$ and hence $\displaystyle \int_{\partial D(0, 1) \setminus I} \varphi(e^{i\theta}) (\log |e^{i\theta} - \zeta| + \text{Re } e^{-i\theta}\zeta) ~dm(e^{i\theta}) = O(\delta|\zeta|^2).$ We combine these into the unified estimate $\displaystyle \int_{\partial D(0, 1) \setminus I} \varphi(e^{i\theta}) (\log |e^{i\theta} - \zeta| + \text{Re } e^{-i\theta}\zeta) ~dm(e^{i\theta}) = O(\delta^{1/2}|\zeta|^2)$ valid for all ${|\zeta| \leq 1}$, hence almost surely. Taking expected values (and recalling that ${\mathbf E \log|e^{i\theta} - \zeta| = -U_\zeta(e^{i\theta})}$) we get $\displaystyle \int_{\partial D(0, 1) \setminus I} \varphi(e^{i\theta})\left(\text{Re }e^{-i\theta}\mu - U_\zeta(e^{i\theta})\right) ~dm(e^{i\theta}) = O(\delta^{1/2}(|\mu|^2 + \sigma^2)).$ In the last lemma we bounded ${|\mu|}$ so we can absorb all the terms with ${\mu}$ in them to get $\displaystyle \int_{\partial D(0, 1) \setminus I} \varphi(e^{i\theta})U_\zeta(e^{i\theta}) ~dm(e^{i\theta}) = O(\delta^{1/2}\sigma^2) + o(\sigma^2) + o(1)^n.$ We also have $\displaystyle \int_{\partial D(0, 1) \setminus I} \varphi ~dm = O(\delta)$ (here Tao refers to a mysterious undefined measure ${\sigma}$ but I’m pretty sure he means ${m}$). Putting these integrals together with the integrals over ${I}$, $\displaystyle U_\zeta(a)\int_{\partial D(0, 1)} \varphi ~dm - \int_{\partial D(0, 1)} \varphi(e^{i\theta}) U_\zeta(e^{i\theta}) ~dm(e^{i\theta}) \geq -O(\delta^{1/2}\sigma^2) - o(\sigma^2) - o(1)^n.$ By underspill we can delete ${\delta}$, thus $\displaystyle U_\zeta(a)\int_{\partial D(0, 1)} \varphi ~dm - \int_{\partial D(0, 1)} \varphi(e^{i\theta}) U_\zeta(e^{i\theta}) ~dm(e^{i\theta}) \geq - o(\sigma^2) - o(1)^n.$ We now consider the specific case ${\varphi = 1}$. Then $\displaystyle U_\zeta(a) - \int_{\partial D(0, 1)} U_\zeta ~dm \geq -o(\sigma^2) - o(1)^n.$ Now Tao claims and doesn’t prove $\displaystyle \int_{\partial D(0, 1)} U_\zeta ~dm = 0.$ To see this, we expand as $\displaystyle \int_{\partial D(0, 1)} U_\zeta ~dm = -\mathbf E \frac{1}{2\pi} \int_0^{2\pi} \log|\zeta - e^{i\theta}| ~d\theta$ using Fubini’s theorem. Now we use Jensen’s formula with ${g(z) = \zeta - z}$, which has a zero exactly at ${\zeta}$.
This seems problematic if ${\zeta = 0}$, but we can condition on ${|\zeta| > 0}$. Indeed, if ${\zeta = 0}$ then we have $\displaystyle \int_0^{2\pi} \log|\zeta - e^{i\theta}| ~d\theta = \int_0^{2\pi} \log 1 ~d\theta = 0$ which already gives us what we want. Anyways, if ${|\zeta| > 0}$, then by Jensen’s formula, $\displaystyle \frac{1}{2\pi} \int_0^{2\pi} \log|\zeta - e^{i\theta}| ~d\theta = \log |\zeta| - \log |\zeta| = 0.$ So that’s how it is. Thus we have $\displaystyle -U_\zeta(a) \leq o(\sigma^2) + o(1)^n.$ Since ${|a - \zeta| \geq 1}$, ${\log |a - \zeta| \geq 0}$, so the same is true of its expected value ${-U_\zeta(a)}$. This gives the desired bound $\displaystyle U_\zeta(a) = o(\sigma^2) + o(1)^n.$ We can use that bound to discard ${U_\zeta(a)}$ from the average $\displaystyle U_\zeta(a)\int_{\partial D(0, 1)} \varphi ~dm - \int_{\partial D(0, 1)} \varphi(e^{i\theta}) U_\zeta(e^{i\theta}) ~dm(e^{i\theta}) \geq - o(\sigma^2) - o(1)^n,$ thus $\displaystyle \int_{\partial D(0, 1)} \varphi(e^{i\theta}) U_\zeta(e^{i\theta}) ~dm(e^{i\theta})= o(\sigma^2) + o(1)^n.$ Repeating the Jensen’s formula argument from above we see that we can replace ${\varphi}$ with ${\varphi - k}$ for any ${k \geq 0}$. So this holds even if ${\varphi}$ is not necessarily nonnegative. $\Box$ # Let’s Read: Sendov’s conjecture in high degree, part 3: case zero, and a sketch of case one In this post we’re going to complete the proof of case zero and continue the proof of case one. In the last two posts we managed to prove: Theorem 1 Let ${n}$ be a nonstandard natural, and let ${f}$ be a monic polynomial of degree ${n}$ on ${\mathbf C}$ with all zeroes in ${\overline{D(0, 1)}}$. Suppose that ${a}$ is a zero of ${f}$ such that: 1. Either ${a}$ or ${a - 1}$ is infinitesimal, and 2. ${f}$ has no critical points on ${\overline{D(a, 1)}}$. Let ${\lambda}$ be a random zero of ${f}$ and ${\zeta}$ a random critical point of ${f}$. Let ${\mu}$ be the expected value of ${\lambda}$. Let ${z}$ be a complex number outside some measure zero set and let ${\gamma}$ be a contour that misses the zeroes of ${f}$,${f'}$. Then: 1. ${\zeta \in \overline{D(0, 1)} \setminus \overline{D(a, 1)}}$. 2. ${\mu = \mathbf E \zeta}$. 3. One has $\displaystyle U_\lambda(z) = -\frac{1}{n} \log |f(z)|$ and $\displaystyle U_\zeta(z) = -\frac{\log n}{n - 1} - \frac{1}{n-1} \log |f'(z)|.$ 4. One has $\displaystyle s_\lambda(z) = \frac{1}{n} \frac{f'(z)}{f(z)}$ and $\displaystyle s_\zeta(z) = \frac{1}{n - 1} \frac{f''(z)}{f'(z)}.$ 5. One has $\displaystyle U_\lambda(z) - \frac{n - 1}{n} U_\zeta(z) = \frac{1}{n} \log |s_\lambda(z)|$ and $\displaystyle s_\lambda(z) - \frac{n - 1}{n} s_\zeta(z) = -\frac{1}{n} \frac{s_\lambda'(z)}{s_\lambda(z)}.$ 6. One has $\displaystyle f(\gamma(1)) = f(\gamma(0)) \exp \left(n \int_\gamma s_\lambda(z) ~dz\right)$ and $\displaystyle f'(\gamma(1)) = f(\gamma(0)) \exp\left((n-1) \int_\gamma s_\zeta(z) ~dz\right).$ Moreover, 1. If ${a}$ is infinitesimal (case zero), then ${\lambda^{(\infty)},\zeta^{(\infty)}}$ are identically distributed and almost surely lie in $\displaystyle C = \{e^{i\theta}: 2\theta \in [\pi, 3\pi]\}.$ Moreover, if ${K}$ is any compact set which misses ${C}$, then $\displaystyle \mathbf P(\lambda \in K) = O\left(a + \frac{\log n}{n^{1/3}}\right),$ so ${d(\lambda, C)}$ is infinitesimal in probability. 2. If ${a - 1}$ is infinitesimal (case one), then ${\lambda^{(\infty)}}$ is uniformly distributed on ${\partial D(0, 1)}$ and ${\zeta^{(\infty)}}$ is almost surely zero. 
Moreover, $\displaystyle \mathbf E \log \frac{1}{|\lambda|}, \mathbf E \log |\zeta - a| = O(n^{-1}).$ We also saw that Sendov’s conjecture in high degree was equivalent to the following result, that we will now prove. Lemma 2 Let ${n}$ be a nonstandard natural, and let ${f}$ be a monic polynomial of degree ${n}$ on ${\mathbf C}$ with all zeroes in ${\overline{D(0, 1)}}$. Let ${a}$ be a zero of ${f}$ such that: 1. Either ${a \log n}$ is infinitesimal (case zero), or 2. There is a standard ${\varepsilon_0 > 0}$ such that $\displaystyle 1 - o(1) \leq a \leq 1 - \varepsilon_0^n$ (case one). If there are no critical points of ${f}$ in ${\overline{D(a, 1)}}$, then ${0 = 1}$. 1. Case zero Now we prove case zero — the easy case — of Lemma 2. Suppose that ${a \log n}$ is infinitesimal. In this case, ${\lambda^{(\infty)}, \zeta^{(\infty)}}$ are identically distributed and almost surely lie in ${C}$. Lemma 3 There are ${0 < r_1 < r_2 < 1/2}$ such that for every ${|z| \in [r_1, r_2]}$, $\displaystyle |s_{\lambda^{(\infty)}}(z)| \sim 1$ uniformly. Proof: Since ${\lambda^{(\infty)}}$ is supported in ${C}$, ${s_{\lambda^{(\infty)}}}$ is holomorphic away from ${C}$. Since ${\lambda^{(\infty)}}$ is bounded, if ${z}$ is near ${\infty}$ then $\displaystyle s_{\lambda^{(\infty)}}(z) = \mathbf E\frac{1}{z - \lambda^{(\infty)}} \sim \mathbf E \frac{1}{z} = \frac{1}{z}$ which is nonzero near ${\infty}$. So the variety ${s_{\lambda^{(\infty)}} = 0}$ is discrete, so there are ${0 < r_1 < r_2 < 1/2}$ such that ${s_{\lambda^{(\infty)}}(re^{i\theta}) \neq 0}$ whenever ${r \in [r_1, r_2]}$. To see this, suppose not; then for every ${r_1 < r_2}$ we can find ${r \in [r_1, r_2]}$ and ${\theta}$ with ${s_{\lambda^{(\infty)}}(re^{i\theta}) = 0}$, so ${s_{\lambda^{(\infty)}}}$ has infinitely many zeroes in the compact set ${\overline{D(0, 1/2)}}$. Since this is definitely not true, the claim follows by continuity of ${s_{\lambda^{(\infty)}}}$. $\Box$ Let ${m}$ be the number of zeroes of ${s_{\lambda^{(\infty)}}}$ in ${D(0, r_1)}$, so ${m}$ is a nonnegative standard natural since ${s_{\lambda^{(\infty)}}}$ is standard and ${\overline{D(0, 1/2)}}$ is compact. Let ${\gamma(\theta) = re^{i\theta}}$ where ${r \in (r_1, r_2)}$; then $\displaystyle \frac{1}{2\pi i}\int_\gamma \frac{s_{\lambda^{(\infty)}}'(z)}{s_{\lambda^{(\infty)}}(z)} ~dz = m,$ by the argument principle. We claim that in fact ${m \leq -1}$, which contradicts that ${m}$ is nonnegative. This will be proven in the rest of this section. Here something really strange happens in Tao’s paper. He proves this: Lemma 4 One has $\displaystyle \left|\frac{1}{n} \frac{f'(z)}{f(z)} - s_{\lambda^{(\infty)}}(z)\right| = o(1)$ in ${L^1_{loc}}$. We now need to show that the convergence in ${L^1_{loc}}$ above commutes with the use of the argument principle so that $\displaystyle \frac{1}{2\pi i}\int_\gamma \frac{(f'/f)'(z)}{(f'/f)(z)} ~dz = m;$ this will be good because we have control on the zeroes and critical points of ${f}$ using our contradiction assumption. What’s curious to me is that Tao seems to substitute this with convergence in ${L^\infty_{loc}}$ on an annulus. Indeed, convergence in ${L^\infty_{loc}}$ does commute with use of the argument principle, but at no point of the proof does it seem like he uses the convergence in ${L^1_{loc}}$. So I include the proof of the latter in the next section as a curiosity item, but I think it can be omitted entirely. Tell me in the comments if I’ve made a mistake here.
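As a sanity check on the counting itself (this is my own toy computation, not anything from Tao's paper or from the argument above), here is what the argument principle count looks like numerically for the concrete polynomial ${f(z) = z^n - 1}$, whose zeroes are the ${n}$th roots of unity and whose ${n-1}$ critical points all sit at the origin. The point is only to confirm the bookkeeping: the contour integral of the logarithmic derivative of ${f'/f}$ returns (critical points) minus (zeroes) inside the circle.

```python
import numpy as np

# My own numerical sanity check of the count used above: for s = f'/(n f),
#   (1/(2 pi i)) * integral over |z| = r of (s'/s)(z) dz
#     = (# critical points of f inside) - (# zeroes of f inside).
# We take f(z) = z^n - 1: its zeroes are the n-th roots of unity, its n - 1
# critical points all sit at 0, so on |z| = 1/2 the count should be n - 1.
n = 12
f = np.poly1d([1.0] + [0.0] * (n - 1) + [-1.0])   # z^n - 1
fp, fpp = f.deriv(), f.deriv(2)

r = 0.5
theta = np.linspace(0.0, 2.0 * np.pi, 100001)
z = r * np.exp(1j * theta)
log_deriv = fpp(z) / fp(z) - fp(z) / f(z)         # (s'/s)(z); the 1/n cancels
integrand = log_deriv * 1j * z                    # dz = i z dtheta
integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(theta))
print(round((integral / (2j * np.pi)).real))      # 11 = (n - 1) - 0
```

Of course this says nothing about the actual case zero configuration, where the whole game is to show that the same count is ${\leq -1}$; it only checks that the normalization ${\frac{1}{2\pi i}\int_\gamma}$ and the sign convention above are right.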
If ${\chi}$ is a smooth cutoff supported on ${\overline{D(0, 1/2)}}$ and identically one on ${\overline{D(0, r_3)}}$ (where ${r_2 < r_3 < 1/2}$), one has $\displaystyle \frac{1}{n} \frac{f'(z)}{f(z)} = \mathbf E \frac{1}{z - \lambda} = \mathbf E \frac{1 - \chi(\lambda)}{z - \lambda} + \mathbf E \frac{\chi(\lambda)}{z - \lambda}.$ The ${1 - \chi}$ term is easy to deal with, since for every ${z}$, ${(1 - \chi(\lambda))/(z - \lambda)}$ is a bounded continuous function of ${\lambda}$ whenever ${|z| < r_2}$ (so ${|\lambda - z| \geq r_3 - r_2 > 0}$). By the definition of being infinitesimal in distribution we have $\displaystyle \left|\mathbf E\frac{1 - \chi(\lambda)}{z - \lambda} - \frac{1}{z - \lambda^{(\infty)}}\right| = o(1).$ Therefore $\displaystyle \mathbf E \frac{1 - \chi(\lambda)}{z - \lambda} - s_{\lambda^{(\infty)}}(z)$ is uniformly infinitesimal. Now we treat the ${\chi}$ term. Interestingly, this is the main point of the argument where we use that ${a \log n}$ is infinitesimal, and the rest of the argument seems to mainly go through with much weaker assumptions on ${a}$. Lemma 5 There is an ${r \in [r_1, r_2]}$ such that if ${|z| = r}$ then $\displaystyle \left|\mathbf E \frac{\chi(\lambda)}{z - \lambda}\right| = o(1).$ Proof: By the triangle inequality and its reverse, if ${|z| = r}$ then $\displaystyle \left|\mathbf E \frac{\chi(\lambda)}{z - \lambda}\right| \leq \mathbf E \frac{\chi(\lambda)}{|r - |\lambda||}.$ Here ${r \in [r_1, r_2]}$ is to be chosen. Since we have $\displaystyle \mathbf P(\lambda \in K) = O\left(a + \frac{\log n}{n^{1/3}}\right)$ whenever ${K}$ is a compact set which misses ${C}$, this in particular holds when ${K = \overline{B(0, 1/2)}}$. Since ${a = o(1/\log n)}$ and ${n^{-1/3}\log n = o(1/\log n)}$ it follows that $\displaystyle \mathbf P(|\lambda| \leq 1/2) = o(1/\log n).$ In particular, $\displaystyle \mathbf E\chi(\lambda) = o(1/\log n).$ We now claim $\displaystyle \int_{r_1}^{r_2} \frac{dr}{\max(|r - |\lambda||, n^{-10})} = O(\log n).$ By splitting the integrand we first bound $\displaystyle \int_{\substack{[r_1,r_2]\\|r-|\lambda|| \leq n^{-10}}} \frac{dr}{\max(|r - |\lambda||, n^{-10})} \leq 2n^{-10}n^{10} = 2 = O(\log n)$ since ${\log n}$ is nonstandard and the domain of integration has measure at most ${2n^{-10}}$. On the other hand, the other term $\displaystyle \int_{\substack{[r_1,r_2]\\|r-|\lambda|| \geq n^{-10}}} \frac{dr}{\max(|r - |\lambda||, n^{-10})} \leq \int_{n^{-10}}^{r_2 - r_1} \frac{dr}{r} = \log(r_2 - r_1) - 10 \log n = O(\log n)$ since ${\log(r_2 - r_1)}$ is standard while ${\log n}$ is nonstandard. This proves the claim. Putting the above two paragraphs together and using Fubini’s theorem, $\displaystyle \int_{r_1}^{r_2} \mathbf E \frac{\chi(\lambda)}{\max(|r - |\lambda||, n^{-10})} ~dr = \mathbf E\chi(\lambda) \int_{r_1}^{r_2} \frac{1}{\max(|r - |\lambda||, n^{-10})} ~dr = O(\log n) \mathbf E\chi(\lambda)$ is infinitesimal. So outside of a set of infinitesimal measure, ${r \in [r_1, r_2]}$ satisfies $\displaystyle \mathbf E \frac{\chi(\lambda)}{\max(|r - |\lambda||, n^{-10})} = o(1).$ If ${|r - |\lambda|| \leq n^{-10}}$ then there is a (deterministic) zero ${\lambda_0}$ such that ${|r - |\lambda_0|| \leq n^{-10}}$, thus ${r}$ lies in a set of measure ${2n^{-10}}$. There are ${\leq n}$ such sets since there are ${n}$ zeroes of ${f}$, so their union has measure ${2n^{-9}}$, which is infinitesimal. Therefore $\displaystyle \mathbf E \frac{\chi(\lambda)}{|r - |\lambda||} = o(1)$ which implies the claim. 
$\Box$ Summing up, we have $\displaystyle \frac{f'}{nf} = s_{\lambda^{(\infty)}} + o(1)$ in ${L^\infty(B(0, r))}$, where ${r}$ is as in the previous lemma. Pulling out the factor of ${1/n}$, which is harmless, we can use the argument principle to deduce that ${m}$ is the number of zeroes minus poles of ${f'/f}$; that is, the number of critical points minus zeroes of ${f}$. Indeed, convergence in ${L^\infty}$ does commute with the argument principle, so we can throw out the infinitesimal ${o(1)}$. But ${a}$ is infinitesimal, and we assumed that ${f}$ had no critical points in ${\overline{D(a, 1)}}$, which contains ${D(0, r)}$. So ${f}$ has no critical points, but has a zero ${a}$; therefore ${m \leq -1}$. In a way this part of the proof was very easy: the only tricky bit was using the cutoff to get convergence in ${L^\infty}$ like we needed. The hint that we could use the argument principle was the fact that ${a}$ was infinitesimal, so we had control of the critical points near the origin. 2. Convergence in ${L^1_{loc}}$ Let ${\nu}$ be the distribution of ${\lambda}$ and ${\nu^{(\infty)}}$ of ${\lambda^{(\infty)}}$. Since ${\lambda - \lambda^{(\infty)}}$ is infinitesimal in distribution, ${\nu^{(\infty)} - \nu}$ is infinitesimal in the weak topology of measures; that is, for every continuous function ${g}$ and compact set ${K}$, $\displaystyle \int_K g ~d(\nu^{(\infty)} - \nu) = o(1).$ Now $\displaystyle s_{\lambda^{(\infty)}}(z) - s_\lambda(z) = \int_{D(0, 1)} \frac{d\nu^{(\infty)}(w)}{z - w} - \int_{D(0, 1)} \frac{d\nu(w)}{z - w}.$ If ${K}$ is a compact set and ${\rho}$ is Lebesgue measure then $\displaystyle \int_K s_{\lambda^{(\infty)}} - s_\lambda ~d\rho = \int_K \int_{D(0, 1)} \frac{d(\nu^{(\infty)} - \nu)(w)}{z - w} ~d\rho(z).$ By Tonelli’s theorem $\displaystyle \int_K \int_{D(0, 1)} \frac{d(\nu^{(\infty)} - \nu)(w)}{|z - w|} ~d\rho(z) = \int_{D(0, 1)} \int_K \frac{d\rho(z)}{|z - w|} d(\nu^{(\infty)} - \nu)(w)$ and the inner integral is finite since ${1/|z|}$ is Lebesgue integrable in codimension ${2}$. So the outer integrand is a bounded continuous function, which implies that $\displaystyle \int_K \int_{D(0, 1)} \frac{d(\nu^{(\infty)} - \nu)(w)}{|z - w|} ~d\rho(z) = o(1)$ which gives what we want when we recall $\displaystyle s_\lambda(z) = \frac{1}{n} \frac{f'(z)}{f(z)}$ and we plug in ${s_\lambda}$. 3. Case one: Outlining the proof The proof for case one is much longer, and is motivated by the pseudo-counterexample $\displaystyle f(z) = z^n - 1.$ Here ${a}$ is an ${n}$th root of unity, and ${f}$ has no critical points on ${D(a, 1)}$, but does have ${n - 1}$ critical points at ${0 \in \partial D(a, 1)}$. Similar pseudo-counterexamples hold for $\displaystyle 1 - o(1) \leq a \leq 1 - \varepsilon_0^n$ where ${\varepsilon_0 > 0}$ is standard. We will seek to control these examples by controlling ${\zeta}$ up to an error of size ${O(\sigma^2) + o(1)^n}$; here ${\sigma^2}$ is the variance of ${\zeta}$ and ${o(1)^n}$ is an infinitesimal raised to the power of ${n}$, thus is very small, and forces us to balance out everything in terms of ${a}$. As discussed in the introduction of this post, ${\zeta}$ is infinitesimal in probability (and, in particular, its expected value ${\mu}$ is infinitesimal); thus, with overwhelming probability, the critical points of ${f}$ are all infinitesimals. Combining this with the fact that ${\lambda^{(\infty)}}$ is uniformly distributed on ${\partial D(0, 1)}$, it follows that ${f}$ sort of looks like ${f(z) = z^n - 1}$. 
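To make the pseudo-counterexample a little more concrete, here is a tiny numerical check (again my own, not from the paper) that for ${f(z) = z^n - 1}$ the critical points really do all sit at ${\mu = 0}$, and that the approximation the lemmas below are after, ${f(z) \approx f(0) + \frac{1}{n}f'(z)(z - \mu)}$, is exact for this particular ${f}$.

```python
import numpy as np

# My own illustration of the model case f(z) = z^n - 1: all critical points
# sit at mu = 0, and f(z) = f(0) + (1/n) f'(z) (z - mu) holds *exactly* here.
# The lemmas below establish this shape of approximation, with error terms,
# for the general case one polynomial.
n = 16
f = np.poly1d([1.0] + [0.0] * (n - 1) + [-1.0])        # z^n - 1
fp = f.deriv()

crit = np.roots(fp.coeffs)                             # n - 1 copies of 0
mu = crit.mean()
print(np.max(np.abs(crit)))                            # 0.0

z = np.array([0.9 + 0.4j, -1.2 + 0.1j, 0.5 - 1.1j])    # arbitrary test points
approx = f(0) + fp(z) * (z - mu) / n
print(np.max(np.abs(f(z) - approx)))                   # ~1e-13: exact up to rounding
```

The whole difficulty in the general case is that the critical points are only concentrated near the origin in probability, which is where the ${\sigma^2}$ error terms in the lemmas come from.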
Lemma 6 (preliminary bounds) For any standard compact set ${K \subset \mathbf C}$, one has $\displaystyle f(z) = f(0) + O((|z| + o(1))^n)$ and $\displaystyle f'(z) = O((|z| + o(1))^n)$ uniformly in ${z \in K}$. In other words, ${f}$ sort of grows like the translate of a homogeneous polynomial of degree ${n}$. It would be nice if ${\zeta}$ was infinitesimal in ${L^\infty}$, but this isn’t quite true; the following lemma is the best we can do. Lemma 7 (uniform convergence of ${\zeta}$) There is a standard compact set $\displaystyle S = (\overline{D(0, 1)} \cap \partial D(1, 1)) \cup T$ where ${T}$ is countable, standard, does not meet ${\overline{D(1, 1)}}$, and consists of isolated points of ${S}$, such that ${d(\zeta, S)}$ is infinitesimal in ${L^\infty}$. So we think of ${S}$ as some sort of generalization of ${0}$. Away from ${S}$ we have good bounds on ${f,f'}$: Lemma 8 (approximating ${f,f'}$ outside ${S}$) For any standard compact set ${K \subset \mathbf C \setminus S}$: 1. Uniformly in ${z, w \in K}$, $\displaystyle f'(w) = (1 + O(n|z - w|\sigma^2 e^{o(n|z -w|)})) \frac{f'(z)}{(z - \mu)^{n-1}} (w - \mu)^{n-1}.$ 2. For every standard ${\varepsilon > 0}$ and uniformly in ${z \in K}$, $\displaystyle f(z) = f(0) + \frac{1 + O(\sigma^2)}{n} f'(z) (z - \mu) + O((\varepsilon + o(1))^n).$ As a consequence, we can show that every zero of ${f}$ which is far from ${S}$ is close to the level set $\displaystyle U_\zeta = \frac{1}{n} \log\frac{1}{|f(0)|}.$ This in particular holds for ${a}$, since the standard part of ${a}$ is ${1}$, and ${T}$ does not come close to ${1}$ (so neither does ${S}$). In fact the error term is infinitesimal: Lemma 9 (zeroes away from ${S}$) For any standard compact set ${K \subset \mathbf C \setminus S}$, any standard ${\varepsilon > 0}$, and any zero ${\lambda_0 \in K}$, $\displaystyle U_\zeta(\lambda_0) = \frac{1}{n} \log \frac{1}{|f(0)|} + O(n^{-1}\sigma^2) + O((\varepsilon + o(1))^n)$ uniformly in ${\lambda_0}$. Since ${a}$ satisfies the hypotheses of the above lemma, $\displaystyle U_\zeta(\lambda) - U_\zeta(a) = O(n^{-1}\sigma^2 + (\varepsilon + o(1))^n)$ is infinitesimal. This gives us some more bounds: Lemma 10 (fine control) For every standard ${\varepsilon > 0}$: 1. One has $\displaystyle \mu, 1 - a = O(\sigma^2 + (\varepsilon + o(1))^n).$ 2. For every compact set ${I \subseteq \partial D(0, 1) \setminus S}$ and ${e^{i\theta} \in I}$, $\displaystyle U_\zeta(a) - U_\zeta(e^{i\theta}) \geq -o(\sigma^2) - o(1)^n.$ 3. For every standard smooth function ${\varphi: \partial D(0, 1) \rightarrow \mathbf C}$, $\displaystyle \int_0^{2\pi} \varphi(e^{i\theta}) U_\zeta(e^{i\theta}) ~d\theta = o(\sigma^2) + o(1)^n.$ 4. One has $\displaystyle U_\zeta(a) = o(\sigma^2) + o(1)^n.$ Here Tao claims $\displaystyle \int_0^{2\pi} e^{-2i\theta} \log \frac{1}{|e^{i\theta} - \zeta|} ~d\theta = \frac{\pi}{2}\zeta^2.$ Apparently this follows from Fourier inversion but I don’t see it. In any case if we take the expected value of the left-hand side we get $\displaystyle \int_0^{2\pi} \mathbf Ee^{-2i\theta} \log \frac{1}{|e^{i\theta} - \zeta|} ~d\theta = \int_0^{2\pi} e^{-2i\theta} U_\zeta(e^{i\theta}) ~d\theta = o(\sigma^2) + o(1)^n$ by the fine control lemma, so $\displaystyle \mathbf E \zeta^2 = o(\sigma^2) + o(1)^n.$ In particular this holds for the real part of ${\zeta}$. Since ${\sigma^2}$ is infinitesimal, so are the first two moments of the real part of ${\zeta}$.
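For what it’s worth, here is the computation I think Tao has in mind (my own sketch, so treat it with suspicion). For ${|\zeta| < 1}$ one can expand $\displaystyle \log \frac{1}{|e^{i\theta} - \zeta|} = -\log|1 - \zeta e^{-i\theta}| = \sum_{m=1}^\infty \frac{\zeta^m e^{-im\theta} + \bar{\zeta}^m e^{im\theta}}{2m},$ and integrating against ${e^{-2i\theta}}$ kills every term except ${m = 2}$ in the second sum, giving $\displaystyle \int_0^{2\pi} e^{-2i\theta} \log \frac{1}{|e^{i\theta} - \zeta|} ~d\theta = \frac{\pi}{2}\bar{\zeta}^2$ (with the weight ${e^{2i\theta}}$ one picks up ${\frac{\pi}{2}\zeta^2}$ instead); the boundary case ${|\zeta| = 1}$ then follows by continuity of both sides. Either way, taking expectations and applying the fine control lemma bounds ${|\mathbf E \zeta^2|}$, which is all that is actually needed.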
Since ${|a - \zeta| \in [1, 2]}$, one has $\displaystyle |a - \zeta| - 1 \sim \log |a - \zeta|.$ This is true since for any ${s \in [1, 2]}$ one has ${\log s \sim s - 1}$ (which follows by Taylor expansion). In particular, $\displaystyle |1 - \zeta| \leq |1 - a| + |a - \zeta| = 1 + O((1 - a) + \log |a - \zeta|).$ Let ${\tilde \zeta}$ be the best approximation of ${\zeta}$ on the arc ${\partial D(1, 1) \cap \overline{D(0, 1)}}$, which exists since that arc is compact; then $\displaystyle |\zeta - \tilde \zeta| = O((1 - a) + \log|a - \zeta|).$ Since ${\tilde \zeta \in \partial D(1, 1)}$, it has the useful property that $\displaystyle \text{arg }\tilde \zeta \in [\pi/3,\pi/2] \cup [-\pi/2, -\pi/3];$ therefore $\displaystyle \text{Re } \tilde \zeta^2 \leq -\frac{1}{2} |\tilde \zeta|^2.$ Plugging in the expansion for ${\tilde \zeta}$ we have $\displaystyle \text{Re } \zeta^2 \leq -\frac{1}{2} |\zeta|^2 + O(|\zeta|((1-a) + \log|a -\zeta|) + ((1 - a) + \log|a - \zeta|)^2).$ We now use the inequality ${2|zw| \leq |z|^2 + |w|^2}$ several times. First we bound $\displaystyle \frac{1}{2} |\zeta|((1-a) + \log|a -\zeta|) \leq \frac{1}{4}|\zeta|^2 + O((1-a)^2 + \log^2 |a - \zeta|).$ I had to think a bit about why this is legal; the point is that you can absorb the implied constant on ${\zeta}$ into the implied constant on ${\log |a - \zeta|}$ before applying the inequality. Now we bound $\displaystyle ((1 - a) + \log|a - \zeta|)^2 = (1 - a)^2 + 2(1 - a)\log|a - \zeta| + \log^2 |a - \zeta| = O((1-a)^2 + \log^2 |a - \zeta|)$ by similar reasoning. Thus we conclude the bound $\displaystyle \text{Re } \zeta^2 \leq - \frac{1}{4} |\zeta|^2 + O((1-a)^2 + \log^2 |a - \zeta|),$ or in other words, $\displaystyle \mathbf E \text{Re }\zeta^2 \leq -\frac{1}{4} \mathbf E |\zeta|^2 + O((1-a)^2+ \mathbf E \log^2 |a - \zeta|).$ Applying the fine control lemma, or more precisely the result $\displaystyle 1 - a = O(\sigma^2 + (\varepsilon + o(1))^n),$ as well as the fact that ${1 - a}$ is infinitesimal, we have $\displaystyle (1-a)^2 = (1 - a) O(\sigma^2 + (\varepsilon + o(1))^n) = o(\sigma^2) + o((\varepsilon + o(1))^n)$ for every standard ${\varepsilon > 0}$, hence by underspill $\displaystyle (1-a)^2 = o(\sigma^2) + o(1)^n.$ By the fine control lemma, $\displaystyle U_\zeta(a) = o(\sigma^2) + o(1)^n.$ Thus we bound $\displaystyle \mathbf E \log^2 |a - \zeta| \leq \mathbf E \log |a - \zeta| = -U_\zeta(a) = o(\sigma^2) + o(1)^n$ owing to the fact that ${|a - \zeta| \in [1, 2]}$ so that ${\log |a - \zeta| \in [0, 1]}$. Plugging in the above bounds, $\displaystyle \mathbf E \text{Re }\zeta^2 \leq -\frac{1}{4} \mathbf E|\zeta|^2 + o(\sigma^2) + o(1)^n.$ By definition of variance we have $\displaystyle \mathbf E |\zeta|^2 - |\mu|^2 = \sigma^2$ and ${\mu}$ is infinitesimal so we can spend the ${o(\sigma^2)}$ term as $\displaystyle \mathbf E\text{Re }\zeta^2 \leq -\frac{1+o(1)}{4} \mathbf E |\zeta|^2 + o(1)^n.$ But the fine control lemma said $\displaystyle \mathbf E\text{Re }\zeta^2 = o(\sigma^2) + o(1)^n.$ So $\displaystyle |\mu|^2 + \sigma^2 = o(1)^n.$ In particular, $\displaystyle o(\sigma^2) = o(1)^n$ since ${\mu}$ is infinitesimal. We used underspill to show $\displaystyle (1 - a)^2 = o(\sigma^2) + o(1)^n = o(1)^n$ so $\displaystyle 1 - \varepsilon_0^n \geq a = 1 - o(1)^n > 1 - \varepsilon_0^n$ since ${\varepsilon_0}$ was standard, which implies ${0 = 1}$. Next time, we’ll go back and fill in all the lemmata that we skipped in the proof for case one. This is a tricky bit — pages 25 through 34 of Tao’s paper.
(For comparison, we covered pages 19 through 21, some of the exposition in pages 24 through 34, and pages 34 through 36 this time). Next time, then. # Let’s Read: Sendov’s conjecture in high degree, part 2: distribution of random zeroes Before we begin, I want to fuss around with model theory again. Recall that if ${z}$ is a nonstandard complex number, then ${z^{(\infty)}}$ denotes the standard part of ${z}$, if it exists. We previously defined what it meant for a nonstandard random variable to be infinitesimal in distribution. One can define something similar for any metrizable space with a notion of ${0}$, where ${f}$ is infinitesimal provided that ${d(f, 0)}$ is. For example, a nonstandard random variable ${\eta}$ is infinitesimal in ${L^1_{loc}}$ if for every compact set ${K}$ that ${\eta}$ can take values in, ${||\eta||_{L^1(K)}}$ is infinitesimal, since ${L^1_{loc}}$ is metrizable with $\displaystyle d(0, \eta) = \sum_{m=1}^\infty 2^{-m} \frac{||\eta||_{L^1(K_m)}}{1 + ||\eta||_{L^1(K_m)}}$ whenever ${(K_m)}$ is a compact exhaustion. If ${f}$ is nonstandard, ${|f - f^{(\infty)}|}$ is infinitesimal in some metrizable space, and ${f^{(\infty)}}$ is standard, then we call ${f^{(\infty)}}$ the standard part of ${f}$ in ${\mathcal T}$; then the standard part is unique since metrizable spaces are Hausdorff. If the metrizable space is compact, the case that we will mainly be interested in, then the standard part exists. This is a point that we will use again and again. Passing to the cheap perspective, this says that if ${K}$ is a compact metric space and ${(f^{(n)})}$ is a sequence in ${K}$, then there is an ${f^{(\infty)}}$ which approximates ${f^{(n)}}$ infinitely often, but that’s just the Bolzano-Weierstrass theorem. Last time we used Prokhorov’s theorem to show that if ${\xi}$ is a nonstandard tight random variable, then ${\xi}$ has a standard part ${\xi^{(\infty)}}$ in distribution. We now restate and prove Proposition 9 from the previous post. Theorem 1 (distribution of random zeroes) Let ${n}$ be a nonstandard natural, ${f}$ a monic polynomial of degree ${n}$ with all zeroes in ${\overline{D(0, 1)}}$, and let ${a \in [0, 1]}$ be a zero of ${f}$. Suppose that ${f'}$ has no zeroes in ${\overline{D(a, 1)}}$. Let ${\lambda}$ be a random zero of ${f}$ and ${\zeta}$ a random zero of ${f'}$. Then: 1. If ${a^{(\infty)} = 0}$ (case zero), then ${\lambda^{(\infty)}}$ and ${\zeta^{(\infty)}}$ are identically distributed and almost surely lie in the curve $\displaystyle C = \{e^{i\theta}: 2\theta \in [\pi, 3\pi]\}.$ In particular, ${d(\lambda, C) = o(1)}$ in probability. Moreover, for every compact set ${K \subseteq \overline{D(0, 1)} \setminus C}$, $\displaystyle \mathbf P(\lambda \in K) = O\left(a + \frac{\log n}{n^{1/3}}\right).$ 2. If ${a^{(\infty)} = 1}$ (case one), then ${\lambda^{(\infty)}}$ is uniformly distributed on the unit circle ${\partial D(0, 1)}$ and ${\zeta^{(\infty)}}$ is almost surely zero. Moreover, $\displaystyle \mathbf E \log \frac{1}{|\lambda|}, \mathbf E\log |\zeta - a| = O(n^{-1}).$ 1. Moment-generating functions and balayage We first show that ${\lambda^{(\infty)}}$ and ${\zeta^{(\infty)}}$ have equal moment-generating functions in a suitable sense. To do this, we first show that they have the same logarithmic potential. Let ${\eta}$ be a random variable such that ${|\eta| = O(1)}$ almost surely (that is, ${\eta}$ is almost surely bounded).
Then the logarithmic potential $\displaystyle U_\eta(z) = \mathbf E \log \frac{1}{|z - \eta|}$ is defined almost everywhere as we discussed last time, and is harmonic outside of the essential range of ${\eta}$. Lemma 2 Let ${\eta}$ be a nonstandard, almost surely bounded, random complex number. Then the standard part of ${U_\eta}$ is ${U_{\eta^{(\infty)}}}$ according to the topology of ${L^1_{loc}}$ under Lebesgue measure. Proof: We pass to the cheap perspective. If we instead have a random sequence of ${\eta_j}$ and ${\eta_j \rightarrow \eta}$ in distribution, then ${U_{\eta_j} \rightarrow U_\eta}$ in ${L^1_{loc}}$, since up to a small error in ${L^1_{loc}}$ we can replace ${\log}$ with a test function ${g}$; one then has $\displaystyle \lim_{j \rightarrow \infty} \iint_{K \times \mathbf C} g\left(\frac{1}{|z - w|}\right) ~d\mu_j(w) ~dz = \iint_{K \times \mathbf C} g\left(\frac{1}{|z - w|}\right) ~d\mu(w) ~dz$ where ${\mu_j \rightarrow \mu}$ in the weak topology of measures, ${\mu_j}$ is the distribution of ${\eta_j}$, ${\mu}$ is the distribution of ${\eta}$, and ${K}$ is a compact set equipped with Lebesgue measure. $\Box$ Lemma 3 For every ${1 < |z| \leq 3/2}$, we have $\displaystyle U_\lambda(z) - U_\zeta(z) = O\left(\frac{1}{n} \log \frac{1}{|z| - 1}\right).$ In particular, ${U_{\lambda^{(\infty)}}(z) = U_{\zeta^{(\infty)}}(z)}$. Proof: By definition, ${\lambda \in D(0, 1)}$, so ${z - \lambda \in D(z, 1)}$. Now ${D(z, 1)}$ is a disc with diameter ${T([|z| - 1, |z| + 1])}$ where ${T}$ is a rotation around the origin. Taking reciprocals preserves discs and preserves ${T}$, so ${(z - \lambda)^{-1}}$ sits inside a disc ${W}$ with a diameter ${T[(|z|+1)^{-1}, (|z|-1)^{-1}]}$. Then ${W}$ is convex, so the expected value of ${(z - \lambda)^{-1}}$ is also ${\in W}$. Therefore the Stieltjes transform $\displaystyle s_\lambda(z) = \mathbf E \frac{1}{z - \lambda}$ satisfies ${s_\lambda(z) \in W}$. In particular, $\displaystyle \log |s_\lambda(z)| \in \left[\log \frac{1}{|z| + 1}, \log \frac{1}{|z| - 1}\right].$ But we showed that $\displaystyle U_\lambda(z) - \frac{n - 1}{n} U_\zeta(z) = \frac{1}{n} \log |s_\lambda(z)|$ almost everywhere last time. This implies that for almost every ${z}$, $\displaystyle -\frac{\log(|z| + 1)}{n} \leq U_\lambda(z) - \frac{n - 1}{n}U_\zeta(z) \leq -\frac{\log(|z| - 1)}{n}$ but all terms here are continuous so we can promote this to a statement that holds for every ${z}$. In particular, $\displaystyle U_\lambda(z) - \frac{n - 1}{n} U_\zeta(z) = O\left(\frac{1}{n} \log \frac{1}{|z|-1}\right)$ hence $\displaystyle U_\lambda(z) - U_\zeta(z) = O\left(\frac{1}{n} U_\zeta(z) + \frac{1}{n} \log \frac{1}{|z|-1}\right).$ Since ${|\zeta| < 1}$ while ${1 < |z| < 3/2}$, ${|z - \zeta|}$ is bounded from above and below by a constant times ${|z| - 1}$. Therefore the same holds of its logarithm ${U_\zeta(z)}$, which is bounded from above and below by a constant times ${-\log(|z| - 1)}$. This implies the first claim. To derive the second claim from the first, we use the previous lemma, which implies that we must show that $\displaystyle \log \frac{1}{|z| - 1} = O(n)$ in ${L^1_{loc}}$. But this follows since ${-\log|\cdot|}$ is integrable in two dimensions. $\Box$ Lemma 4 Let ${\eta}$ be an almost surely bounded random variable. 
Then $\displaystyle U_\eta(Re^{i\theta}) = -\log R + \frac{1}{2} \sum_{m \neq 0} \frac{e^{im\theta}}{|m| R^{|m|}} \mathbf E\eta^{|m|}.$ Proof: One has the Taylor series $\displaystyle \log \frac{1}{|Re^{i\theta} - w|} = -\log R + \frac{1}{2} \sum_{m \neq 0} \frac{e^{im\theta} w^{|m|}}{|m| R^{|m|}}.$ Indeed, by rescaling and using ${\log(ab) = \log a + \log b}$, we may assume ${R = 1}$. The summands expand as $\displaystyle \text{Re }\frac{e^{im\theta} w^{|m|}}{|m| R^{|m|}} = \frac{w^{|m|} \cos |m|\theta}{|m|}$ and the imaginary parts all cancel by symmetry about ${0}$. Using the symmetry about ${0}$ again we get $\displaystyle -\log R + \frac{1}{2} \sum_{m \neq 0} \frac{e^{im\theta} w^{|m|}}{|m| R^{|m|}} = \sum_{m=1}^\infty \frac{w^{|m|} \cos |m|\theta}{|m|}.$ This equals the left-hand side as long as ${|w| < R}$. Taking expectations and commuting the expectation with the sum using Fubini’s theorem (since ${\eta}$ is almost surely bounded), we see the claim. $\Box$ Lemma 5 For all ${m \geq 1}$, one has $\displaystyle \mathbf E\lambda^m - \mathbf E\zeta^m = O\left(\frac{m \log m}{n}\right).$ In particular, ${\lambda^{(\infty)}}$ and ${\zeta^{(\infty)}}$ have identical moments. Proof: If we take ${1 < R \leq 3/2}$ then we conclude that $\displaystyle \sum_{m \neq 0} \frac{e^{im\theta}}{|m| R^{|m|}} \mathbf E\lambda^{|m|} - \sum_{m \neq 0} \frac{e^{im\theta}}{|m| R^{|m|}} \mathbf E\zeta^{|m|} = O\left(\frac{1}{n} \log \frac{1}{R - 1}\right).$ The left-hand side is a Fourier series, and by uniqueness of Fourier series it holds that for every ${m}$, $\displaystyle \frac{e^{im\theta}}{|m| R^{|m|}} \mathbf E(\lambda^{|m|} - \zeta^{|m|}) = O\left(\frac{1}{n} \log \frac{1}{R - 1}\right).$ This gives a bound on the difference of moments $\displaystyle \mathbf E\lambda^m - \mathbf E\zeta^m = O\left(\frac{m R^m}{n} \log \frac{1}{R - 1}\right)$ which is only possible if the moments of ${\lambda^{(\infty)}}$ and ${\zeta^{(\infty)}}$ are identical. The left-hand side doesn’t depend on ${R}$, but if ${m \geq 2}$, ${R = 1 + 1/m}$, then ${R^m \leq 2}$ and ${-\log(R - 1) = \log m}$ so the claim holds. On the other hand, if ${m = 1}$ then this claim still holds, since we showed last time that $\displaystyle \mathbf E\lambda = \mathbf E\zeta$ and obviously ${1 \log 1 = 0}$. $\Box$ Here I was puzzled for a bit. Surely if two random variables have the same moment-generating function then they are identically distributed! But, while we can define the moment-generating function of a random variable as a formal power series ${F}$, it is not true that ${F}$ has to have a positive radius of convergence, in which case the inverse Laplace transform of ${F}$ is ill-defined. Worse, the circle is not simply connected, and in case one, we have to look at a uniform distribution on the circle, whose moments therefore aren’t going to points on the circle, so the moment-generating function doesn’t tell us much. 2. Balayage We recall the definition of the Poisson kernel ${P}$: $\displaystyle P(Re^{i\theta}, re^{i\alpha}) = \sum_{m = -\infty}^\infty \frac{r^{|m|}}{R^{|m|}} e^{im(\theta - \alpha)}$ whenever ${0 < r < R}$ is a radius. Convolving the Poisson kernel against a continuous function ${g}$ on ${\partial B(0, R)}$ solves the Dirichlet problem of ${B(0, R)}$ with boundary data ${g}$. Definition 6 Let ${\eta \in D(0, R)}$ be a random variable. The balayage of ${\eta}$ is $\displaystyle \text{Bal}(\eta)(Re^{i\theta}) = \mathbf EP(Re^{i\theta}, \eta).$ Balayage is a puzzling notion. 
First, the name refers to a hair-care technique, which is kind of unhelpful. According to Tao, we’re supposed to interpret balayage as follows. If ${w_0 \in B(0, R)}$ is an initial datum for Brownian motion ${w}$, then ${P(Re^{i\theta}, w_0)}$ is the probability density of the first location ${Re^{i\theta}}$ where ${w}$ passes through ${\partial B(0, R)}$. Tao asserts this without proof, but conveniently, this was a problem in my PDE class last semester. The idea is to approximate ${\mathbf R^2}$ by the lattice ${L_\varepsilon = \varepsilon \mathbf Z^2}$, which we view as a graph where each vertex has degree ${4}$, with one edge to each of the vertices directly above, below, left, and right of it. Then the Laplacian on ${\mathbf R^2}$ is approximated by the graph Laplacian on ${L_\varepsilon}$, and Brownian motion is approximated by the discrete-time stochastic process wherein a particle starts at the vertex that best approximates ${w_0}$ and at each stage has a ${1/4}$ chance of moving to each of the vertices adjacent to its current position. So suppose that ${w_0}$ and ${Re^{i\theta}}$ are actually vertices of ${L_\varepsilon}$. The probability density ${P_\varepsilon(Re^{i\theta}, w_0)}$ is harmonic in ${w_0}$ with respect to the graph Laplacian since it is the mean of ${P_\varepsilon(Re^{i\theta}, w)}$ as ${w}$ ranges over the adjacent vertices to ${w_0}$; therefore it remains harmonic as we take ${\varepsilon \rightarrow 0}$. The boundary conditions follow similarly. Now ${\eta}$ if is a random initial datum for Brownian motion which starts in ${D(0, R)}$, the balayage of ${\eta}$ is again a probability density on ${\partial B(0, R)}$ that records where one expects the Brownian motion to escape, but this time the initial datum is also random. I guess the point is that balayage serves as a substitute for the moment-generating function in the event that the latter is just a formal power series. We want to be able to use analytic techniques on the moment-generating function, but we can’t, so we just use balayage instead. Let ${\psi}$ be the balayage of ${\eta}$. Since ${\eta}$ is bounded, we can use Fubini’s theorem to commute the expectation with the sum and see that $\displaystyle \psi(Re^{i\theta}) = \sum_{m-\infty}^\infty R^{-|m|} e^{im\theta} \mathbf E(r^{|m|} e^{-im\alpha}) = 1 + 2\sum_{m=1}^\infty R^{-|m|} \cos m\theta \mathbf E(r^{|m|} \cos m\alpha)$ provided that ${\eta = re^{i\alpha}}$. It will be convenient to rewrite this in the form $\displaystyle \psi(Re^{i\theta}) = 1 + 2\text{Re} \sum_{m=1}^\infty R^{-m}e^{im\theta} \mathbf E\eta^m$ so ${\psi}$ is uniquely determined by the moment-generating function of ${\eta}$. 
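Since the Poisson kernel is doing all the work in this section, here is a quick numerical check (my own, with made-up values of ${R, r, \alpha}$) of the two facts that get used repeatedly: the series defining ${P}$ sums to the usual closed form, and its integral over the whole circle is ${2\pi}$ for any interior point, so that ${\text{Bal}(\eta)}$ really is a probability density once you divide by ${2\pi}$.

```python
import numpy as np

# My own numerical check of the Poisson kernel facts used in this section:
# the series sum_m (r/R)^{|m|} e^{i m (theta - alpha)} agrees with the closed
# form (1 - rho^2)/(1 - 2 rho cos(theta - alpha) + rho^2), rho = r/R, and its
# integral over the full circle equals 2*pi for any interior point r < R.
R, r, alpha = 1.3, 0.8, 0.7      # made-up sample values with 0 < r < R
rho = r / R
theta = np.linspace(0.0, 2.0 * np.pi, 20001)

closed = (1 - rho**2) / (1 - 2 * rho * np.cos(theta - alpha) + rho**2)
series = sum((rho ** abs(m)) * np.exp(1j * m * (theta - alpha))
             for m in range(-200, 201))
print(np.max(np.abs(closed - series.real)))   # essentially zero

integral = np.sum(0.5 * (closed[1:] + closed[:-1]) * np.diff(theta))
print(integral / (2.0 * np.pi))               # ~1.0
```

The ${2\pi}$ comes entirely from the ${m = 0}$ term of the series; every other term integrates to zero, which is just the mean value property in disguise.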
In particular, ${\lambda^{(\infty)}}$ and ${\zeta^{(\infty)}}$ have identical balayage, and one has a bound $\displaystyle \text{Bal}(\lambda)(Re^{i\theta}) - \text{Bal}(\zeta)(Re^{i\theta}) = O\left(\frac{1}{n}\sum_{m=1}^\infty \frac{m \log m}{R^m}\right).$ We claim that $\displaystyle \sum_{m=1}^\infty \frac{m \log m}{R^m} = O\left(-\frac{\log(R-1)}{(R - 1)^2}\right)$ which implies the bound $\displaystyle \text{Bal}(\lambda)(Re^{i\theta}) - \text{Bal}(\zeta)(Re^{i\theta}) = O\left(\frac{1}{n}\frac{\log\frac{1}{R-1}}{(R - 1)^2}\right).$ To see this, we discard the ${m = 1}$ term since ${1 \log 1 = 0}$, which implies that $\displaystyle \sum_{m=1}^\infty \frac{m \log m}{R^m} = \sum_{M=1}^\infty \sum_{m=2^M}^{2^{M+1} - 1} \frac{m \log m}{R^m}.$ Up to a constant factor we may assume that the logarithms are base ${2}$ in which case we get a bound $\displaystyle \sum_{m=1}^\infty \frac{m \log m}{R^m} \leq C\sum_{M=1}^\infty \frac{M2^M}{R^{2^M}}.$ The constant is absolute since ${R \in (1, 3/2]}$. By the integral test, we get a bound $\displaystyle \sum_{M=1-\log(R-1)}^\infty \frac{M2^M}{R^{2^M}} \leq C\int_{-\log(R-1)}^\infty \frac{x2^x}{R^{2^x}} ~dx \leq C\int_{-\log(R-1)}^\infty \frac{2^{x^{(1+\varepsilon)}}}{R^{2^x}} ~dx.$ Using the bound $\displaystyle \int_{1/(R-1)}^\infty \frac{dy}{R^y} \leq CR^{-1/(R-1)} \leq C2^{-1/(R-1)}$ for any ${N}$ and the change of variable ${y = 2^x}$ (thus ${dy = 2^x \log 2 ~dx}$), we get a bound $\displaystyle \sum_{M=1-\log(R-1)}^\infty \frac{M2^M}{R^{2^M}} \leq C \int_{-\log(R-1)} \frac{dy}{R^y} \leq C2^{-1/(R-1)}$ since the ${\varepsilon}$ error in the exponent can’t affect the exponential decay of the integral in ${1/(R-1)}$. Since we certainly have $\displaystyle 2^{-1/(R-1)} \leq C\frac{-\log(R-1)}{(R-1)^2}$ this is a suitable tail bound. To complete the proof of the claim we need to bound the main term. To this end we bound $\displaystyle \sum_{M=1}^{-\log(R-1)} \frac{M2^M}{R^{2^M}} \leq \log\frac{1}{R-1} \sup_{x > 0} \frac{x2^x}{R^{2^x}} = \log\frac{1}{R-1} 2 \uparrow \sup_y \frac{y \log y}{R^y}.$ Here ${\alpha \uparrow \beta = \alpha^\beta}$ denotes exponentiation. Now if ${R - 1}$ is small enough (say ${R - 1 < 3/4}$), this supremum will be attained when ${x > 1}$, thus ${y \log y \leq 2y}$. Therefore $\displaystyle \sum_{M=1}^{-\log(R-1)} \frac{M2^M}{R^{2^M}} \leq \left(2\uparrow \sup_{y > 0} \frac{y}{R^y}\right)^2 \log\frac{1}{R-1} .$ Luckily ${yR^{-y}}$ is easy to differentiate: its critical point is ${1/y = \log R}$. This gives $\displaystyle \sup_{y > 0} \frac{y}{R^y} \leq \log \frac{1}{R - 1}$ so $\displaystyle \left(2\uparrow \sup_{y > 0} \frac{y}{R^y}\right)^2 \leq \frac{1}{(R-1)^2}$ which was the bound we needed, and proves the claim. Maybe there’s an easier way to do this, because Tao says the claim is a trivial consequence of dyadic decomposition. Let’s interpret the bound that we just proved. Well, if the balayage of ${\eta}$ is supposed to describe the point on the circle ${\partial B(0, R)}$ at which a Brownian motion with random initial datum ${\eta}$ escapes, a bound on a difference of two balyages should describe how the trajectories diverge after escaping. In this case, the divergence is infinitesimal, but at different speeds depending on ${R}$. As ${R \rightarrow 1}$, our infinitesimal divergence gains a positive standard part, while if ${R}$ stays close to ${3/2}$, the divergence remains infinitesimal. 
This makes sense, since if we take a bigger circle we forget more and more about the fact that ${\zeta,\lambda}$ are not the same random variable, since Brownian motion has more time to “forget more stuff” as it just wanders around aimlessly. So in the regime where ${R}$ is close to ${3/2}$, it is reasonable to take standard parts and pass to ${\zeta^{(\infty)}}$ and ${\lambda^{(\infty)}}$, while in the regime where ${R}$ is close to ${1}$ this costs us dearly. 3. Case zero Suppose that ${a}$ is infinitesimal. We showed last time that ${\zeta \in \overline{D(0, 1)} \setminus \overline{D(a, 1)}}$, so ${d(\zeta, C) = O(a)}$ is infinitesimal. Therefore ${\zeta^{(\infty)} \in C}$ almost surely. I think there’s a typo here, because Tao lets ${K}$ range over ${D(0, 1) \setminus C}$ and considers points ${e^{i\theta} \in D(0, 1) \setminus C}$, which don’t exist since ${|e^{i\theta}| = 1}$ while every point in ${D(0, 1)}$ has ${|\cdot| < 1}$. I think this can be fixed by taking closures, which is what I do in the next lemma. Tao proves a “qualitative” claim and then says that by repeating the argument and looking out for constants you can get a “quantitative” version which is what he actually needs. I’m just going to prove the quantitative argument straight-up. The idea is that if ${K}$ is a compact set which misses ${C}$ and ${\lambda \in K}$ then a Brownian motion with initial datum ${\lambda}$ will probably escape through an arc ${J}$ which is close to ${K}$, but ${J}$ is not close to ${C}$ so a Brownian motion which starts at ${\zeta}$ will probably not escape through ${J}$. Therefore ${\lambda,\zeta}$ have very different balayage, even though the difference in their balayage was already shown to be infinitesimal. I guess this shows the true power of balayage: even though the moment-generating function is “just” a formal power series, we know that the essential supports of ${\lambda,\zeta}$ must “look like each other” up to rescaling in radius. This still holds in case one, where one of them is a circle and the other is the center of the circle. Either way, you get the same balayage, since whether you start at some point on a circle or you start in the center of the circle, if you’re a Brownian motion you will exhibit the same long-term behavior. In the following lemmata, let ${K \subset \overline{D(0, 1)} \setminus C}$ be a compact set. The set ${\{\theta \in (-\pi/2, \pi/2): e^{i\theta} \in K\}}$ is compact since it is the preimage of a compact set, so contained a compact interval ${I_K \subseteq (-\pi/2, \pi/2)}$. Lemma 7 One has $\displaystyle \inf_{w \in K} \int_{I_K} P(Re^{i\theta}, w) ~d\theta > 0.$ Proof: Since ${K}$ is compact the minimum is attained. Let ${w}$ be the minimum. Since ${P}$ is a real-valued harmonic function in ${w}$, thus $\displaystyle \Delta \int_{I_K} P(Re^{i\theta}, w) ~d\theta = \int_{I_K} \Delta P(Re^{i\theta}, w) ~d\theta = 0,$ the maximum principle implies that the worst case is when ${K}$ meets ${\partial D(0, R)}$ and ${w \in \partial D(0, R)}$, say ${w = Re^{i\alpha}}$. Then $\displaystyle P(Re^{i\theta}, w) = \sum_{m=-\infty}^\infty e^{im(\theta - \alpha)}.$ Of course this is just a formal power series and doesn’t make much sense. 
But if instead ${w = re^{i\alpha}}$ where ${r/R}$ is very small depending on a given ${\varepsilon > 0}$, then, after discarding quadratic terms in ${r/R}$, $\displaystyle P(Re^{i\theta}, w) \leq \frac{1 + \varepsilon}{1 - 2(r/R)\cos(\theta - \alpha)}.$ This follows since in general $\displaystyle P(Re^{i\theta}, w) = \frac{1 - (r/R)^2}{1 - 2(r/R) \cos(\theta - \alpha) + (r/R)^2}.$ Now $\displaystyle \int_{I_K^c} \frac{d\theta}{1 - 2(r/R)\cos(\theta - \alpha)} < \pi$ since the integrand is maximized when ${\cos(\theta - \alpha) = 0}$, in which case the integrand evaluates to the measure of ${I_K^c}$, which is ${< \pi}$ since ${I_K^c = (-\pi/2, \pi/2) \setminus I_K}$ and ${I_K}$ has positive measure. Therefore $\displaystyle \int_{I_K^c} P(Re^{i\theta}, w) ~d\theta < \frac{3\pi}{2}.$ On the other hand, for any ${w}$ one has $\displaystyle \int_{-\pi/2}^{\pi/2} P(Re^{i\theta}, w) ~d\theta = 2\pi,$ so this gives a lower bound on the integral over ${I_K}$. $\Box$ Lemma 8 If ${1 < R \leq 3/2}$ then $\displaystyle \mathbf P(\lambda \in K) \leq C_K\left(a + R - 1 + \frac{\log \frac{1}{R - 1}}{n(R-1)^2} \right).$ Proof: Let ${w = \lambda}$ in the previous lemma, conditioning on the event ${\lambda \in K}$, to see that $\displaystyle \int_{I_K} P(Re^{i\theta}, \lambda) ~d\theta \geq \delta_K$ where ${\delta_K > 0}$. Taking expectations and dividing by the probability that ${\lambda \in K}$, we can use Fubini’s theorem to deduce $\displaystyle \mathbf P(\lambda \in K) \leq C_K \int_{I_K} \text{Bal}(\lambda)(Re^{i\theta}) ~d\theta$ where ${C_K\delta_K = 1}$. Applying the bound on ${|\text{Bal}(\lambda) - \text{Bal}(\zeta)|}$ from the section on balayage, we deduce $\displaystyle \mathbf P(\lambda \in K) \leq C_K \int_{I_K} \text{Bal}(\zeta)(Re^{i\theta}) ~d\theta + C_K\frac{\log\frac{1}{R-1}}{n(R-1)^2}.$ We already showed that ${d(\zeta, C) = O(a)}$. So in order to show $\displaystyle \int_{I_K} \text{Bal}(\zeta)(Re^{i\theta}) ~d\theta \leq C_K(a + R - 1),$ which was the bound that we wanted, it suffices to show that for every ${re^{i\alpha}}$ such that ${d(re^{i\alpha}, C) = O(a)}$, $\displaystyle \int_{I_K} P(Re^{i\theta}, re^{i\alpha}) ~d\theta \leq C_K(a + R - 1).$ Tao says that “one can show” this claim, but I wasn’t able to do it. I think the point is that under those circumstances one has ${r = R - O(a)}$ and ${\cos \alpha \ll a}$ even as ${\cos \theta \gg 0}$, so we have some control on ${\cos(\theta - \alpha)}$. In fact I was able to compute $\displaystyle \int_{I_K} P(Re^{i\theta}, re^{i\alpha}) ~d\theta = -\sum_m (r/R)^{|m|}\frac{e^{-im(\alpha + \delta)} + e^{-im(\alpha - \delta)}}{m}$ which suggests that this is the right direction, but the bounds I got never seemed to go anywhere. Someone bug me in the comments if there’s an easy way to do this that I somehow missed. $\Box$ Now we take ${R = 1 + n^{-1/3}}$ to complete the proof. 4. Case one Suppose that ${1 - a}$ is infinitesimal. Let ${\mu}$ be the expected value of ${\lambda}$ (hence also of ${\zeta}$). Let ${0 < \delta \leq 1/2}$ be a standard real. We first need to go on an excursion to a paper of Dégot, who proves the following theorem: Lemma 9 One has $\displaystyle |f'(a)| \geq cn |f(\delta)|.$ Moreover, $\displaystyle |f(\delta)| \leq (1 + \delta^2 - 2\delta \text{Re }\mu)^{n/2}.$ I will omit the proof since it takes some complex analysis I’m pretty unfamiliar with.
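The second inequality, at least, is easy to convince yourself of: it follows from AM-GM, since $\frac{1}{n}\sum_j |\delta - \lambda_j|^2 \leq 1 + \delta^2 - 2\delta \text{Re }\mu$ whenever all the zeroes satisfy ${|\lambda_j| \leq 1}$. Here is a toy numerical verification (the random zeroes and parameters are my own choices, not anything from the paper):

```python
# Toy check of the second Degot inequality: if f is monic with zeroes lambda_j in the
# closed unit disk and mu is the mean of the zeroes, then for 0 < delta <= 1/2,
#   |f(delta)| = prod_j |delta - lambda_j|  <=  (1 + delta^2 - 2*delta*Re(mu))**(n/2).
import numpy as np

rng = np.random.default_rng(5)
for trial in range(5):
    n = 40
    lam = np.sqrt(rng.random(n)) * np.exp(2j * np.pi * rng.random(n))   # zeroes in the unit disk
    mu = lam.mean()
    for delta in (0.1, 0.25, 0.5):
        lhs = np.prod(np.abs(delta - lam))
        rhs = (1 + delta**2 - 2 * delta * mu.real) ** (n / 2)
        assert lhs <= rhs + 1e-12
print("the inequality |f(delta)| <= (1 + delta^2 - 2 delta Re mu)^(n/2) held in every trial")
```

The first inequality is the deep one, and it is the part of the proof I am omitting.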
It seems to need Grace’s theorem, which I guess is a variant of one of the many theorems in complex analysis that says that the polynomial image of a disk is kind of like a disk. It also uses some theorem called the Walsh contraction principle that involves polynomials on the projective plane. Curious. In what follows we will say that an event ${E}$ is standard-possible if the probability that ${E}$ happens has positive standard part. Lemma 10 For every ${\varepsilon > 0}$, ${\mathbf P(\text{Re }\zeta \leq \varepsilon)}$ is standard-possible. Besides, ${|f'(a)| > n}$. Proof: Since ${|\zeta - a| > 1}$ almost surely and $\displaystyle U_\zeta(a) = \frac{\log n}{n - 1} - \frac{1}{n - 1} \log |f'(a)|$ but $\displaystyle U_\zeta(a) = -\mathbf E \log |\zeta - a| < 0,$ we have $\displaystyle |f'(a)| > n.$ Combining this with the lemma we see that the standard part of ${|f(\delta)|}$ is ${> 0}$, so $\displaystyle 1^{1/n} \leq O(\sqrt{1 + \delta^2 - 2\delta\text{Re }\mu}).$ On the other hand, $\displaystyle 1 - O(n^{-1}) \leq 1^{1/n}$ and since ${n}$ is nonstandard, ${1/n}$ is infinitesimal, so the constant in ${O(\sqrt{1 + \delta^2 - 2\delta\text{Re }\mu})}$ gets eaten. In particular, $\displaystyle 1 - O(n^{-1}) \leq \sqrt{1 + \delta^2 - 2\delta\text{Re }\mu}$ which implies that $\displaystyle 1 + o(1) \leq 1 + \delta^2 - 2\delta\text{Re }\mu$ and hence $\displaystyle \text{Re }\mu \leq \frac{\delta}{2} + o(1).$ Since this is true for arbitrary standard ${\delta}$, underspill implies that there is an infinitesimal ${\kappa}$ such that $\displaystyle \text{Re }\mu \leq \kappa.$ But ${|\text{Re }\zeta| \leq 1}$ almost surely, and we just showed $\displaystyle \mathbf E\text{Re }\zeta \leq \kappa.$ So the claim holds. $\Box$ We now allow ${\delta}$ to take the value ${0}$, thus ${0 \leq \delta \leq 1/2}$. Lemma 11 One has $\displaystyle |f(0)| \sim |f(\delta)| \sim 1$ and $\displaystyle |f'(a)| \sim n.$ Moreover, ${|f(z)| \sim 1}$ if ${|z - 1/2| < 1/100}$, so ${f}$ has no zeroes ${z}$ in that disk. Proof: Since $\displaystyle \mathbf E \log\frac{1}{|z - \zeta|} = \frac{\log n}{n - 1} - \frac{1}{n - 1} \log |f'(z)|$ one has $\displaystyle \log |f'(a)| - \log |f'(\delta)| = (n-1)\mathbf E \log\frac{|a - \zeta|}{|\delta - \zeta|}.$ Now ${|a - \zeta| \geq 1}$ and ${|\zeta| \leq 1}$. Here I drew two unit circles in ${\mathbf C}$, one centered at the origin and one centered at ${1}$ (since ${|a - 1|}$ is infinitesimal); ${\zeta}$ is (up to infinitesimal error) in the first circle and out of the second. The rightmost points of intersection between the two circles are on a vertical line which by the Pythagorean theorem is to the left of the vertical line ${x = a/2}$, which in turn is to the left of the perpendicular bisector ${x = (a+\delta)/2}$ of ${[\delta, a]}$. Thus ${|a - \zeta| \geq |\delta - \zeta|}$, and if ${|\delta - \zeta| = |a - \zeta|}$ then the real part of ${\zeta}$ is ${(a+\delta)/2}$. In particular, if the standard real part of ${\zeta}$ is ${< 1/2}$ then ${|a - \zeta| > |\delta - \zeta|}$, so ${\log |a - \zeta|/|\delta - \zeta|}$ has positive standard part. By the previous lemma, it is standard-possible that the standard real part of ${\zeta}$ is ${\leq 1/4 < 1/2}$, so ${\log|a-\zeta|/|\delta - \zeta|}$ standard-possibly has positive standard part, and it is almost surely nonnegative.
Plugging into the above we deduce the existence of a standard absolute constant ${c > 0}$ such that $\displaystyle \log |f'(a)| - \log |f'(\delta)| \geq cn.$ In particular, $\displaystyle f'(\delta) \leq |f'(\delta)| \leq e^{-cn} |f'(a)|.$ Keeping in mind that ${|f'(a)| > n}$ is nonstandard, this doesn’t necessarily mean that ${f'(\delta)}$ has nonpositive standard part, but it does give a pretty tight bound. Taking a first-order Taylor approximation we get $\displaystyle f(0) = f(\delta) + O(e^{-cn}|f'(a)|).$ But one has $\displaystyle |f'(a)| \geq cn |f(\delta)|$ from the Dégot lemma. Clearly this term dominates ${e^{-cn}|f'(a)|}$ so we have $\displaystyle |f(0)| \geq \frac{c}{n} |f'(a)|.$ Since one has a lower bound ${|f'(a)| > n}$ this implies ${|f(0)|}$ is controlled from below by an absolute constant. We also claim ${|f(0)| \leq 1}$. In fact, we showed last time that $\displaystyle -U_\lambda(0) = \frac{1}{n} \log |f(0)|;$ we want to show that ${\log |f(0)| \leq 0}$, so it suffices to show that ${U_\lambda(0) \geq 0}$, or in other words that $\displaystyle \mathbf E \log |\lambda| \leq 0.$ Since ${|\lambda| \leq 1}$ by assumption on ${f}$, this is trivial. We deduce that $\displaystyle |f(0)| \sim |f(\delta)| \sim 1$ and hence $\displaystyle |f'(a)| \sim n.$ Now Tao claims that the proof that ${|f(z)| \sim 1}$ is similar, if ${|z - 1/2| < 1/100}$. Since ${\delta = 1/2}$ was a valid choice of ${\delta}$ we have ${|f(1/2)| \sim 1}$. Since ${|z - 1/2| < 1/100}$, if ${\text{Re }\zeta \leq 1/4}$ then ${|a - \zeta|/|z - \zeta| \geq c > 1}$ where ${c}$ is an absolute constant. Applying the fact that ${\text{Re }\zeta \leq 1/4}$ is standard-possible and ${\log|a-\zeta|/|z - \zeta|}$ is almost surely nonnegative we get $\displaystyle f'(z) \leq e^{-cn} |f'(a)|$ so we indeed have the claim. $\Box$ We now prove the desired bound $\displaystyle \mathbf E \log \frac{1}{|\lambda|} \leq O(n^{-1}).$ Actually, $\displaystyle \mathbf E \log \frac{1}{|\lambda|} = \frac{1}{n} \log \frac{1}{|f(0)|}$ as we proved last time, so the bound ${|f(0)| \sim 1}$ guarantees the claim. In particular $\displaystyle \mathbf E \log \frac{1}{|\lambda^{(\infty)}|} = 0$ by Fatou’s lemma. So ${|\lambda^{(\infty)}| = 1}$ almost surely. Therefore ${U_{\lambda^{(\infty)}}}$ is harmonic on ${D(0, 1)}$, and we already showed that ${|f(z)| \sim 1}$ if ${|z - 1/2|}$ was small enough, thus $\displaystyle U_\lambda(z) = O(n^{-1})$ if ${|z - 1/2|}$ was small enough. That implies ${U_{\lambda^{(\infty)}} = 0}$ on an open set and hence everywhere. Since $\displaystyle U_\eta(Re^{i\theta}) = \frac{1}{2} \sum_{m \neq 0} \frac{e^{im\theta}}{|m|} \mathbf E\eta^{|m|}$ we can plug in ${\eta = \lambda^{(\infty)}}$ and conclude that all moments of ${\lambda^{(\infty)}}$ except the zeroth moment are zero. So ${\lambda^{(\infty)}}$ is uniformly distributed on the unit circle. By overspill, I think one can intuit that if ${f}$ is a random polynomial of high degree which has a zero close to ${1}$, all zeroes in ${D(0, 1)}$, and no critical point close to ${a}$, then ${f}$ sort of looks like $\displaystyle z \mapsto \prod_{k=0}^{n-1} z - \omega^k$ where ${\omega}$ is a primitive root of unity of the same degree as ${f}$. Therefore ${f}$ looks like a cyclotomic polynomial, and therefore should have lots of zeroes close to the unit circle, in particular close to ${1}$, a contradiction. This isn’t rigorous but gives some hint as to why this case might be bad.
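To make the heuristic concrete, here is the picture for the model polynomial ${f(z) = z^n - 1}$ itself (just a numerical illustration, nothing more): its zeroes are equidistributed on the unit circle, all of its critical points sit at the origin, ${|f(0)| = 1}$, and ${|f'(1)| = n}$, which is exactly the limiting behavior described by Lemma 11 and the discussion above.

```python
# The "case one" picture for f(z) = z^n - 1: zeroes on the unit circle, critical points
# at the origin, |f(0)| = 1 and |f'(1)| = n.
import numpy as np

n = 50
coeffs = np.zeros(n + 1)
coeffs[0], coeffs[-1] = 1.0, -1.0             # coefficients of f(z) = z^n - 1
lam = np.roots(coeffs)                        # zeroes (numerically, the n-th roots of unity)
zeta = np.roots(np.polyder(coeffs))           # critical points (all at the origin for this f)

print("max | |lambda_j| - 1 | :", np.abs(np.abs(lam) - 1).max())
print("max |zeta_j|           :", np.abs(zeta).max())
print("|f(0)|                 :", abs(np.polyval(coeffs, 0.0)))
print("|f'(1)| / n            :", abs(np.polyval(np.polyder(coeffs), 1.0)) / n)
```

Anyway, back to the rigorous argument.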
Now one has $\displaystyle \mathbf E \log |\zeta - a| = \frac{1}{n-1} \log \frac{|f'(a)|}{n} = O(n^{-1})$ and in particular by Fatou’s lemma $\displaystyle \mathbf E \log |\zeta^{(\infty)} - 1| = 0.$ But it was almost surely true that ${\zeta^{(\infty)} \notin D(1, 1)}$, thus that ${\log |\zeta^{(\infty)} - 1| \geq 0}$. So this enforces ${\zeta^{(\infty)} \in \partial D(1, 1)}$ almost surely. In particular, almost surely, $\displaystyle \zeta^{(\infty)} \in \partial D(1, 1) \cap \overline{D(0, 1)} = \gamma.$ Since ${\gamma}$ is a contractible curve, its complement is connected. We recall that ${U_{\lambda^{(\infty)}} = U_{\zeta^{(\infty)}}}$ near infinity, and since we already know the distribution of ${\lambda^{(\infty)}}$, we can use it to compute ${U_{\zeta^{(\infty)}}}$ near infinity. Tao says the computation of ${U_{\zeta^{(\infty)}}}$ is a straightforward application of the Newtonian shell theorem; he’s not wrong but I figured I should write out the details. For ${\eta = \lambda^{(\infty)}}$ one has $\displaystyle U_\eta(z) = \mathbf E \log \frac{1}{|z - \eta|} = \frac{1}{2\pi} \int_{\partial D(0, 1)} \log \frac{1}{|z - w|} ~d|w|$ where the ${d|w|}$ denotes that this is a line integral in ${\mathbf R^2}$ rather than in ${\mathbf C}$. Translating we get $\displaystyle U_\eta(z) = -\frac{1}{2\pi} \int_{\partial D(z, 1)} \log |w| ~d|w|$ which is the integral of the fundamental solution of the Laplace equation over ${\partial D(z, 1)}$. If ${|z| > 1}$ (reasonable since ${z}$ is close to infinity), this implies the integrand is harmonic, so by the mean-value formula one has $\displaystyle U_\eta(z) = -\log |z|$ and so this holds for both ${\eta = \lambda^{(\infty)}}$ and ${\eta = \zeta^{(\infty)}}$ near infinity. But then ${U_{\zeta^{(\infty)}}}$ is harmonic away from ${\gamma}$, so that implies that $\displaystyle U_{\zeta^{(\infty)}} = \log \frac{1}{|z|}.$ Since the distribution ${\nu}$ of ${\zeta^{(\infty)}}$ is (up to a factor of ${-2\pi}$) the Laplacian of ${U_{\zeta^{(\infty)}}}$ one has $\displaystyle \nu = -\frac{1}{2\pi}\Delta \log \frac{1}{|z|} = \delta_0.$ Therefore ${\zeta^{(\infty)} = 0}$ almost surely. In particular, ${\zeta}$ is infinitesimal almost surely. This completes the proof in case one. By the way, I now wonder if when one first learns PDE it would be instructive to think of the fundamental solution of the Laplace equation and the mean-value formulae as essentially a consequence of the classical laws of gravity. Of course the arrow of causation actually points the other way, but we are humans living in a physical world and so have a pretty intuitive understanding of what gravity does, while stuff like convolution kernels seem quite abstract. Next time we’ll prove a contradiction for case zero, and maybe start on the proof for case one. The proof for case one looks really goddamn long, so I’ll probably skip or blackbox some of it, maybe some of the earlier lemmata, in the interest of my own time. # Let’s Read: Sendov’s conjecture in high degree, part 1 Having some more free time than usual, I figured I would read a recent paper that looked interesting. However, I’m something of a noob at math, so I figure it’s worth it to take it slowly and painstakingly think through the details. This will be a sort of stream-of-consciousness post where I do just that. The paper I’ll be reading will be Terry Tao’s “Sendov’s conjecture for sufficiently high degree polynomials.” This paper looks interesting to me because it applies “cheap nonstandard” methods to prove a result in complex analysis.
In addition it uses probability-theoretic methods, which I’m learning a lot of right now. Sendov’s conjecture is the following: Sendov’s conjecture Let ${f: \mathbf C \rightarrow \mathbf C}$ be a polynomial of degree ${n \geq 2}$ that has all zeroes of magnitude ${\leq 1}$. If ${a}$ is a zero then ${f'}$ has a zero in ${\overline{D(a, 1)}}$. Without loss of generality, we may assume that ${f}$ is monic and that ${a \in [0, 1]}$. Indeed, if ${f}$ is a polynomial in the variable ${z}$ we rotate ${z}$ by the argument of ${a}$ and then rescale by the top coefficient. Tao notes that by Tarski’s theorem, in principle it suffices to get an upper bound ${n_0}$ on the degree of a counterexample and then use a computer-assisted proof to complete the argument. I think that for every ${n \leq n_0}$ you’d get a formula in the theory of real closed fields that could be decided in ${O(n^r)}$ time, where ${r}$ is an absolute constant (which, unfortunately, is exponential in the number of variables of the formula, and so is probably quite large). Worse, Tao is going to use a compactness argument and so is going to get an astronomical bound ${n_0}$. Still, something to keep in mind — computer-assisted proofs seem like the future in analysis. More precisely, Tao proves the following: Proposition 1 For every ${n}$ in a monotone sequence in ${\mathbf N}$, let ${f}$ be a monic polynomial of degree ${n}$ with all zeroes in ${\overline{D(0, 1)}}$ and let ${a \in [0, 1]}$ satisfy ${f(a) = 0}$. If for every ${n}$, ${f'}$ has no zeroes in ${\overline{D(a, 1)}}$, then ${0 = 1}$. It’s now pretty natural to see how “cheap nonstandard” methods apply. One can pass to a subsequence countably many times and still preserve the hypotheses of the proposition, so by diagonalization and compactness, we may assume good (but ineffective) convergence properties. For example, we can assume that ${a = a^{(\infty)} + o(1)}$, where ${a^{(\infty)}}$ does not depend on ${n}$ and ${o(1)}$ is with respect to ${n}$. Using overspill, one can view the proposition model-theoretically: it says that if ${n}$ is a nonstandard natural number, ${f}$ a monic polynomial of degree ${n}$ with a zero ${a \in [0, 1]}$, and there are no zeroes of ${f'}$ in ${\overline{D(a, 1)}}$, then ${0 = 1}$. Tao never fully takes this POV, but frequently appeals to results like the following: Proposition 2 Let ${P}$ be a first-order predicate. Then: 1. (Overspill) If for every sufficiently small standard ${\varepsilon}$, ${P(\varepsilon)}$, then there is an infinitesimal ${\delta}$ such that ${P(\delta)}$. 2. (Underspill) If for every infinitesimal ${\delta}$, ${P(\delta)}$, then there are arbitrarily small ${\varepsilon}$ such that ${P(\varepsilon)}$. 3. (${\aleph_0}$-saturation) If ${P}$ is ${\forall z \in K(f(z) = O(1))}$ where ${K \subseteq \mathbf C}$ is compact, then the implied constant in the statement of ${P}$ is independent of ${z}$. Henceforth we will use asymptotic notation in the nonstandard sense; for example, a quantity is ${o(1)}$ if it is infinitesimal. This is equivalent to the cheap nonstandard perspective where a quantity is ${o(1)}$ iff it is with respect to ${n}$, where ${n}$ is ranging over some monotone sequence of naturals.
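Before getting further into the nonstandard setup, the statement itself is easy to spot-check numerically for small degrees. This is of course no evidence at all about the large-${n}$ regime that the proof cares about, and the sampling scheme below is just my own quick experiment:

```python
# Numerical spot-check of Sendov's conjecture: for random polynomials with all zeroes in
# the closed unit disk, every zero should have a critical point within distance 1 of it.
import numpy as np

rng = np.random.default_rng(1)
worst = 0.0
for _ in range(500):
    n = int(rng.integers(2, 40))
    zeros = np.sqrt(rng.random(n)) * np.exp(2j * np.pi * rng.random(n))   # uniform in the disk
    crit = np.roots(np.polyder(np.poly(zeros)))                           # zeroes of f'
    dists = np.abs(zeros[:, None] - crit[None, :]).min(axis=1)            # zero -> nearest critical point
    worst = max(worst, float(dists.max()))

print("largest observed distance from a zero to its nearest critical point:", worst)
print("Sendov's conjecture predicts this never exceeds 1")
```

None of this helps with the actual proof, of course, which is where the machinery below comes in.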
I think the model-theoretic perspective is helpful here because we are going to pass to subsequences a lot, and at least in the presence of the boolean prime ideal theorem, letting ${n}$ be a fixed nonstandard natural number is equivalent to choosing a nonprincipal ultrafilter on ${\mathbf N}$ that picks out the subsequences we are going to pass to, in the perspective where ${n}$ ranges over a monotone sequence of standard naturals. This follows because the Stone-Cech compactification ${\beta \mathbf N}$ is exactly the space of ultrafilters on ${\mathbf N}$. Indeed, if ${n}$ ranges over a monotone sequence of standard naturals, then in ${\beta \mathbf N}$, ${n}$ converges to an element ${U}$ of ${\beta \mathbf N \setminus \mathbf N}$, which then is a nonprincipal ultrafilter. If ${\mathbf N^\omega/U}$ denotes the ultrapower of ${\mathbf N}$ with respect to ${U}$, then I think the equivalence class of the sequence ${\{1, 2, \dots\}}$ in ${\mathbf N^\omega/U}$ is exactly the limit of ${n}$. Conversely, once a nonprincipal ultrafilter ${U \in \beta \mathbf N \setminus \mathbf N}$ has been fixed, we have a canonical way to pass to subsequences: only pass to a subsequence which converges to ${U}$. This is possible since ${\beta \mathbf N}$ is compact. I think it will be at times convenient to go back to the “monotone sequence of standard naturals” perspective, especially when we’re doing computations, so I reserve the right to go between the two. We’ll call the monotone sequence perspective the “cheap perspective” and the model-theoretic perspective the “expensive perspective”. I’m not familiar with the literature on Sendov’s conjecture, so I’m going to blackbox the reduction that Tao carries out. The reduction says that, due to the Gauss-Lucas theorem and previously existing partial results on Sendov’s conjecture, to prove Proposition 1, it suffices to show: Proposition 3 Let ${n}$ be a nonstandard natural, let ${f}$ be a monic polynomial of degree ${n}$ with all zeroes in ${\overline{D(0, 1)}}$ and let ${a \in [0, 1]}$ satisfy ${f(a) = 0}$. Suppose that ${f'}$ has no zeroes in ${\overline{D(a, 1)}}$ and 1. (Theorem 3.1 in Tao) either ${a = o(1/\log n)}$, or 2. (Theorem 5.1 in Tao) there is a standard ${\varepsilon_0 > 0}$ such that $\displaystyle 1 - o(1) \leq a \leq 1 - \varepsilon_0^n.$ Then ${0 = 1}$. In the former case we have ${a = o(1)}$ and in the latter we have ${|a - 1| = o(1)}$. We’ll call the former “case zero” and the latter “case one.” Tao gives a probabilistic proof of Proposition 3, and that’s going to be the bulk of this post and its sequels. Let ${\zeta}$ be a random zero of ${f'}$, drawn uniformly from the finite set of zeroes. Let ${\lambda}$ denote a random zero of ${f}$ chosen independently of ${\zeta}$. In the cheap perspective, ${\zeta}$ depends on ${n}$, and we are going to study properties of the convergence of ${\zeta}$ as ${n \rightarrow \infty}$, by using our chosen ultrafilter to repeatedly pass to subsequences to make ${\zeta}$ converge in some suitable topology. The probability spaces that ${\zeta}$ lives in depend on ${n}$, but as long as we are interested in a deterministic limit ${c}$ that does not depend on ${n}$, this is no problem. Indeed ${\zeta}$ will converge to ${c}$ uniformly (resp. in probability) provided that for every ${\varepsilon > 0}$, ${|\zeta - c| \leq \varepsilon}$ almost surely (resp. ${\mathbf P(|\zeta - c| \leq \varepsilon) = 1 - o(1)}$), and this makes sense even though the probability space we are studying depends on ${n}$.
The usual definition of convergence in distribution still makes sense even for a random variable ${\zeta}$ converging to a random variable ${X}$ that does not depend on ${n}$, provided that their distributions ${\zeta_*\mathbf P}$ converge vaguely to ${X_*\mathbf P}$. Okay, it’s pretty obvious what being infinitesimally close to a deterministic standard real is in the uniform or probabilistic sense. Expanding out the definition of the vague topology of measures, a nonstandard measure ${\mu}$ on a locally compact Hausdorff space ${Q}$ is infinitesimally close to a standard measure ${\nu}$ provided that for every continuous function ${f}$ with compact support, $\displaystyle \left|\int_Q f~d\mu - \int_Q f~d\nu\right| = o(1).$ This induces a definition of being infinitesimally close in distribution. Okay, no more model-theoretic games, it’s time to start the actual proof. Definition 4 Let ${\eta}$ be a bounded complex random variable. The logarithmic potential of ${\eta}$ is $\displaystyle U_\eta(z) = \mathbf E \log \frac{1}{|z - \eta|}.$ Here ${\mathbf E}$ denotes expected value. Tao claims but does not prove that this definition makes sense for almost every ${z \in \mathbf C}$. To check this, let ${K}$ be a compact set ${\subseteq \mathbf C}$ equipped with Lebesgue measure ${\mu}$ and let ${\nu}$ be the distribution of ${\eta}$. Then $\displaystyle \int_K U_\eta(z) ~d\mu(z) = \int_K \int_\mathbf C \log \frac{1}{|z - \omega|} ~d\nu(\omega) ~d\mu(z)$ and the integrand is singular along the set ${\{z = \omega\}}$, which has real codimension ${2}$ in ${K \times \text{supp} \nu}$. The double integral of a logarithm makes sense almost surely provided that the logarithm blows up with real codimension ${2}$ (to see this, check the double integral of log ${1/x}$ on ${\mathbf R^2}$) so this looks good. Definition 5 Let ${\eta}$ be a bounded complex random variable. The Stieltjes transform of ${\eta}$ is $\displaystyle s_\eta(z) = \mathbf E \frac{1}{z - \eta}.$ Then ${s_\eta}$ is “less singular” than ${U_\eta}$, so this definition is inoffensive almost everywhere. Henceforth Tao lets ${\mu_\eta}$ denote the distribution of ${\eta}$, where ${\eta}$ is any bounded complex random variable. Then ${s_\eta = -\partial U_\eta}$ and ${\mu_\eta = \overline \partial s_\eta/2\pi}$ where ${\partial,\overline \partial}$ are the complex derivative and Cauchy-Riemann operators respectively. Since ${\partial \overline \partial = \Delta}$ we have $\displaystyle 2\pi\mu_\eta = -\Delta U_\eta.$ This just follows straight from the definitions. Of course ${\mu_\eta}$ might not be an absolutely continuous measure, so this only makes sense if we use the calculus of distributions. Does this make sense if ${\eta}$ is deterministic, say ${\eta = 0}$ almost surely? In that case ${\mu_\eta}$ is a Dirac measure at ${0}$ and ${s_\eta(z) = 1/z}$. Everything looks good, since ${U_\eta(z) = -\log |z|}$. For the next claims I need the Gauss-Lucas theorem: Theorem 6 (Gauss-Lucas) If ${P}$ is a polynomial on ${\mathbf C}$, then all zeroes of ${P'}$ belong to the convex hull of the variety ${P = 0}$. Lemma 7 (Lemma 1.6i in Tao) ${\lambda}$ surely lies in ${\overline{D(0, 1)}}$ and ${\zeta}$ surely lies in ${\overline{D(0, 1)} \setminus \overline{D(a, 1)}}$. Proof: The first claim is just a tautology. For the second, by assumption on ${f}$ all zeroes of ${f}$ lie in the convex set ${\overline{D(0, 1)}}$, so their convex hull does as well. In particular ${\zeta}$ lies in ${\overline{D(0, 1)}}$ almost surely by the Gauss-Lucas theorem.
Our contradiction hypothesis says that ${\zeta \notin \overline{D(a, 1)}}$. $\Box$ Lemma 8 (Lemma 1.6ii-iv in Tao) One has ${\mathbf E\lambda = \mathbf E\zeta}$. For almost every ${z \in \mathbf C}$, $\displaystyle U_\lambda(z) = -\frac{1}{n} \log |f(z)|$ and $\displaystyle U_\zeta(z) = \frac{\log n}{n - 1} - \frac{1}{n - 1} \log |f'(z)|.$ Moreover, $\displaystyle s_\lambda(z) = \frac{1}{n} \frac{f'(z)}{f(z)}$ and $\displaystyle s_\zeta(z) = \frac{1}{n - 1} \frac{f''(z)}{f'(z)}.$ Moreover, $\displaystyle U_\lambda(z) - \frac{n-1}{n}U_\zeta(z) = \frac{1}{n} \log |s_\lambda(z)|$ and $\displaystyle s_\lambda(z) - \frac{n - 1}{n} s_\zeta(z) = -\frac{1}{n} \frac{s'_\lambda(z)}{s_\lambda(z)}.$ Proof: We pass to the cheap perspective, so ${n}$ is a large standard natural. Since ${n}$ is large, in particular ${n \geq 2}$, if ${b_{n-1}}$ is the coefficient of ${z^{n-1}}$ in ${f}$ then the roots of ${f}$ sum to ${-b_{n-1}}$. The roots of ${f'}$ sum to ${-(n-1)b_{n-1}/n}$ by calculus. So ${\mathbf E\lambda = \mathbf E\zeta}$. We write $\displaystyle f(z) = \prod_{j=1}^n z - \lambda_j$ and $\displaystyle f'(z) = n\prod_{j=1}^{n-1} z - \zeta_j.$ Taking ${-\log|\cdot|}$ of both sides and then dividing by ${n}$ (resp. ${n - 1}$) we immediately get ${U_\lambda}$ and ${U_\zeta}$. Then we take the complex derivative of both sides of the ${U_\lambda}$ and ${U_\zeta}$ formulae to get the formulae for ${s_\lambda}$ and ${s_\zeta}$. Now the formula for ${\log |s_\lambda|}$ follows by subtracting the above formulae, as does the formula for ${s_\lambda'/s_\lambda}$. $\Box$ Since the distributions of ${\lambda}$ and ${\zeta}$ have bounded support (it’s contained in ${\overline{D(0, 1)}}$) by Prokhorov’s theorem we can find standard random variables ${\lambda^{(\infty)}}$ and ${\zeta^{(\infty)}}$ such that ${\lambda - \lambda^{(\infty)}}$ is infinitesimal in distribution and similarly for ${\zeta}$. The point is that ${\lambda^{(\infty)}}$ and ${\zeta^{(\infty)}}$ give, up to an infinitesimal error, information about the behavior of ${f}$, ${f'}$, and ${f''}$ by the above lemma and the following proposition. Proposition 9 (Theorem 1.10 in Tao) One has: 1. In case zero, ${\lambda^{(\infty)}}$ and ${\zeta^{(\infty)}}$ are identically distributed and almost surely lie in ${C = \{e^{i\theta}: 2\theta \in [\pi, 3\pi]\}}$, so ${d(\lambda, C)}$ is infinitesimal in probability. Moreover, for every compact set ${K \subseteq \overline{D(0, 1)} \setminus C}$, $\displaystyle \mathbf P(\lambda \in K) = O\left(a + \frac{\log n}{n^{1/3}}\right).$ 2. In case one, ${\lambda^{(\infty)}}$ is uniformly distributed on ${\partial D(0, 1)}$ and ${\zeta^{(\infty)}}$ is almost surely zero. Moreover, $\displaystyle \mathbf E \log \frac{1}{|\lambda|}, \mathbf E\log |\zeta - a| = O\left(\frac{1}{n}\right).$ The proposition gives quantitative bounds that force the zeroes to all be in certain locations. Looking ahead in the paper, it follows that: 1. In case zero, the Stieltjes transform of ${\lambda^{(\infty)}}$ is infinitesimally close to ${f'/nf}$, so one can use a stability-of-zeroes argument to show that ${f}$ has no zeroes near the origin, even though ${a = o(1)}$. 2. In case one, if ${\sigma}$ is the standard deviation of ${\zeta}$, then we have control on the zeroes of ${f}$ up to an error of size ${o(\sigma^2) + o(1)^n}$, which we can then use to deduce a contradiction. Next time I’ll start the proof of this proposition.
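Lemma 8 is easy to check numerically for a fixed polynomial, which I found reassuring when sorting out the signs. The test point and the random zeroes below are my own choices; the point is only that the identities hold up to rounding error.

```python
# Toy check of Lemma 8 for a fixed polynomial f with zeroes lambda_j in the unit disk:
#   E lambda = E zeta,
#   U_lambda(z) = -(1/n) log|f(z)|,            s_lambda(z) = f'(z) / (n f(z)),
#   U_zeta(z)   = (log n)/(n-1) - (1/(n-1)) log|f'(z)|.
import numpy as np

rng = np.random.default_rng(2)
n = 30
lam = np.sqrt(rng.random(n)) * np.exp(2j * np.pi * rng.random(n))   # zeroes of f
coeffs = np.poly(lam)
dcoeffs = np.polyder(coeffs)
zeta = np.roots(dcoeffs)                                            # zeroes of f'

z = 1.7 + 0.4j                                                      # test point away from the zeroes
U_lam = np.mean(np.log(1.0 / np.abs(z - lam)))
U_zeta = np.mean(np.log(1.0 / np.abs(z - zeta)))
s_lam = np.mean(1.0 / (z - lam))

print("|E lambda - E zeta|            :", abs(lam.mean() - zeta.mean()))
print("|U_lambda(z) + log|f(z)|/n|    :", abs(U_lam + np.log(abs(np.polyval(coeffs, z))) / n))
print("|s_lambda(z) - f'(z)/(n f(z))| :", abs(s_lam - np.polyval(dcoeffs, z) / (n * np.polyval(coeffs, z))))
print("|U_zeta(z) - log(n)/(n-1) + log|f'(z)|/(n-1)| :",
      abs(U_zeta - np.log(n) / (n - 1) + np.log(abs(np.polyval(dcoeffs, z))) / (n - 1)))
```

As for Proposition 9 itself: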
Its proof apparently follows from the theory of Newtonian potentials, which is not too surprising since ${-\Delta \log \frac{1}{|x|} = 2\pi\delta(x)}$ if ${\Delta}$ is the Laplacian of ${\mathbf R^2}$. It needs the following lemma: Lemma 10 (Lemma 1.6vi in Tao) If ${\gamma}$ is a curve in ${\mathbf C}$ that misses the zeroes of ${f}$ and ${f'}$ then $\displaystyle f(\gamma(1)) = f(\gamma(0)) \exp\left(n \int_\gamma s_\lambda(z) ~dz\right)$ and $\displaystyle f'(\gamma(1)) = f'(\gamma(0)) \exp\left((n-1) \int_\gamma s_\zeta(z) ~dz\right).$ Proof: One has $\displaystyle ns_\lambda(z) = \frac{f'(z)}{f(z)}$ by the previous lemma. Breaking up ${\gamma}$ into finitely many parts one can assume that ${\gamma}$ is a contractible curve in a simply connected set, in which case we have a branch of the logarithm along ${\gamma}$. Now apply the fundamental theorem of calculus. The case for ${s_\zeta}$ is the same. $\Box$ # Internalizing tricks: the Heine-Borel theorem I think that in analysis, the most important results are the tricks, not the theorems. I figure most analysts could prove any of the theorems in Rudin or Pugh at will, not because they have the results memorized, but because they know the tricks. So it’s really important to internalize tricks! Here’s an example of how we could take apart a proof of the Heine-Borel theorem that every closed bounded set is compact, and internalize some of the tricks in it. The proof we want to study is as follows. Step 1. We first prove that [0, 1] is compact. Let $(x_n)$ be a sequence in [0, 1] that we want to show has a convergent subsequence. Let $x_{n_1} = x_1$ and let $I_1 = [0, 1]$. Step 2. Suppose by induction that we are given $I_1, \dots, I_J$ such that $I_j$ is a subinterval of $I_{j-1}$ of half length and there is a subsequence of $(x_n)$ in $I_j$, and $x_{n_j} \in I_j$. By the pigeonhole principle, since there are infinitely many points of $(x_n)$ in $I_J$, if we divide $I_J$ into left and right closed subintervals of equal length, one of those two subintervals has infinitely many points of $(x_n)$ as well. So let that subinterval be $I_{J+1}$ and let $x_{n_{J+1}}$ be the first point of $(x_n)$ after $x_{n_J}$ in $I_{J+1}$. Step 3. After the induction completes we have a subsequence $(x_{n_j})$ of $(x_n)$. By construction, $x_{n_j} \in I_j$ and $I_{j+1}$ is half of $I_j$, so $|x_{n_j} - x_{n_{j+1}}| \leq 2^{1-j}$. That implies that $(x_{n_j})$ is a Cauchy sequence, so it converges in $\mathbf R$, say $x_{n_j} \to x$. Step 4. Since x is a limit of a sequence in [0, 1], and [0, 1] is closed, $x \in [0, 1]$. Therefore $(x_n)$ has a convergent subsequence. So [0, 1] is compact. Step 5. Now let $K = [0, 1]^n$ be a box. We claim that K is compact. To see this, let $(x_n)$ be a sequence in K. If n = 1, then $(x_n)$ has a convergent subsequence. Step 6. Suppose by induction that $[0, 1]^{n-1}$ is compact. Then we can write $x_n = (y_n, z_n)$ where $y_n \in [0, 1]^{n-1}$, $z_n \in [0, 1]$. So there is a convergent subsequence $(y_{n_k})$. Now $(z_{n_k})$ has a convergent subsequence $(z_{n_{k_j}})$, and then $(x_{n_{k_j}})$ is a convergent subsequence. So K is compact. Step 7. Now let K be closed and bounded. So there is a box $L = [-R, R]^n$ such that $K \subseteq L$. Without loss of generality, assume that R = 1. Step 8. Since K is a closed subset of the compact set L, K is compact. Let’s look at the tricks used at each stage: Step 1. We want to show that an arbitrary closed and bounded set is compact.
This sounds quite hard, as such sets can be nasty; however, it is often the case that if you can prove a special case of the theorem, the general theorem follows. Since [0, 1] is the prototypical example of a compact set, and is much nicer than e.g. Cantor dust in 26 dimensions, we first try to prove the Heine-Borel theorem on [0, 1]. Step 2. Here we use the informal principle that compactness is equivalent to path-finding in an infinite binary tree. That is, compactness requires us to make infinitely many choices, which is exactly the same thing as finding a path through an infinitely large tree, where we will have to choose whether to go left or right infinitely many times. Ideally every time we choose whether we go left or right, we will cut down on the complexity of the problem by half. Here the “complexity” is the size of the interval we’re looking at. This notion of “compactness” is ubiquitous in analysis, combinatorics, and logic. It is the deepest part of the proof of the Heine-Borel theorem, and is known as Koenig’s lemma. Step 2 has another key idea to it. We need to make infinitely many choices, so we make infinitely many choices using induction. In general when traversing a graph, inducting on the length of the path so far will come in handy. If you don’t know which way to go, the pigeonhole principle and other nonconstructive tricks will also be highly useful here. Step 3. Compactness gave us a subsequence, but we don’t know what the limit is. But to prove that a sequence converges without referring to an explicit limit, instead show that it is Cauchy. Actually, here we are forced to do this, because the argument of Step 2 could’ve been carried out over the rational numbers, yet the conclusion of the Heine-Borel theorem is false there. So this step could also be interpreted as make sure to use every hypothesis; here the hypothesis that we are working over the reals is key. Step 4. Make sure to use every hypothesis; up to this point we’ve only used that [0, 1] is bounded, not closed. Step 5. Here we again reason that if you can prove a special case of the theorem, the general theorem follows. Step 6. Here n is an arbitrary natural number, so we prove a theorem about every natural number using induction. This is especially nice because the idea behind this proof was to build up the class of compact set iteratively, initializing with the unit interval; at every stage of this induction we also get a unit box. This trick can be viewed as a special case of if you can prove a special case of the theorem, the general theorem follows: indeed, proving a theorem for every natural number would require infinitely many cases to be considered, but here there are just two, the base case and the inductive case. The inductive case was really easy, so the thing we are really interested in is the base case. Step 7. Here we abstract away unnecessary parameters using symmetry. The parameter R is totally useless because topological notions don’t care about scaling. However, we do have a box, and it would be nice if it was a unit box because we just showed that unit boxes are compact. So we might as well forget about R and just assume it’s 1. Step 8. Once again we make sure to use every hypothesis; the boundedness got us inside a box, so the closedness must be used to finish the proof. # Boolean algebras and probability Lately I’ve been thinking a lot about the basics of probability theory. 
I originally learned the definition of probability measure from a logician — Wes Holliday — and his definition has probably permanently influenced how I think about probability, but I think that even analysts like myself agree that the formulation of probability in terms of sample spaces is a little unnatural. For example, one typically makes it a point to never mention specific points in the sample space, but only events — elements of a sigma-algebra over the sample space. Moreover, one typically identifies an event which happens almost surely with the sure event. Random variables are identified with their distributions, and so isomorphism of random variables “forgets” about events that almost surely do not happen as well as individual points in the sample space. Sample spaces are also utterly uninteresting: up to measurable isomorphism, there is only one uncountable standard Borel space, and in practice most probability spaces are Polish and therefore standard. There is also a lot of “junk” in the power set of a standard Borel space; all but a lesser cardinality of sets are not measurable. Simply put, the sample space contains far more information than a probabilist is willing to use. Let me stress that once one actually gets to serious probability theory, which I guess I would define as beginning when one proves the central limit theorem, the above is not terribly important (though it can help — see An uncountable Moore-Schmidt theorem, a paper in ergodic theory which in part inspired this blog post). Worrying about whether one has extra information won’t help you prove inequalities, which is what analysis is all about. But I think when one is still learning the basics, or is considering results which are either deeply infinitary (as in the case of the uncountable Moore-Schmidt theorem) or purely “measurable” (only having to do with sigma-algebras), it is instructive to step back and think about what a sample space really is. As for myself, I’m on the teaching staff of a measure theory course this semester, and an exercise this week was to prove the following theorem, which appears in the exercises of both Billingsley and father Rudin. Theorem 1. If $\Sigma$ is an infinite sigma-algebra, then $\Sigma$ has cardinality at least that of $\mathbb R$. This exercise is hard essentially because it is not measure-theoretic. In fact, while the student is probably tempted to concoct a horribly complicated measure-theoretic argument, the statement is purely combinatorial in nature, and has a simple combinatorial proof if one forgets the underlying set $\Omega$ over which $\Sigma$ is defined. Of course, this proof is totally equivalent to a similar proof about sigma-algebras, but there will be some pitfalls — I will point out one in particular — that arise if one is tempted to refer to $\Omega$, and so that one is not tempted by Lucifer in his guise $\Omega$, it is better to remove that temptation entirely. Before proving Theorem 1, let me demonstrate how one can think about measure theory and probability without ever referring to $\Omega$. Definition 2. By a countably complete lattice, one means a partially ordered set $\mathbb B$ such that every countable subset $X \subseteq \mathbb B$ has a supremum and infimum. In this case, we write $x \wedge y$ (read “$x$ and $y$”) to denote $\inf \{x, y\}$, and similarly $x \vee y$ (“$x$ or $y$”) for $\sup \{x, y\}$. One says “$x$ implies $y$” to mean $x \leq y$. Definition 3.
By a countably complete boolean algebra one means a countably complete lattice $\mathbb B$ equipped with a minimum $0$ (“false”) and a maximum $1$ (“true”), such that: • For every $x, y, z \in \mathbb B$, $x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z)$. • For every $x \in \mathbb B$, there is a unique $!x \in \mathbb B$ (“not $x$“) such that $x \wedge !x = 0$ and $x \vee !x = 1$. Since we will only need countably complete boolean algebras, we will simply refer to such objects as boolean algebras. One immediately checks that the morphisms in the category of boolean algebras are those maps which preserve countable “and”, countable “or”, “true”, and “false”; that the usual laws of classical logic hold in any boolean algebra; and that every sigma-algebra is a boolean algebra under $\subseteq$. Conversely, I think that given a suitable form of the axiom of choice, one can recover a measurable space $(\Omega, \mathbb B)$ from a boolean algebra $\mathbb B$ by letting $\Omega$ consist of a suitable subset of the set of ultrafilters on $\mathbb B$; however, the choice of $\Omega$ is not unique (a nice exercise). The setting of boolean algebras feels like a very natural setting for probability theory to me, because probability theory by design is concerned with events, things for which it makes sense to conjoin with the words “and” and “or” and “not”. One can now easily use the language of propositional logic: for example, two events $E,F$ are mutually exclusive if $E \wedge F = 0$. Let us now state the usual basic notions of probability theory in this new language: Definition 4. Let $\mathbb B$ be a boolean algebra. A probability measure $\mu$ on $\mathbb B$ is a mapping $\mu: \mathbb B \to [0, 1]$ such that, whenever $E_i$ are a sequence of pairwise mutually exclusive events, $\mu\left(\bigvee_i E_i\right) = \sum_i \mu(E_i)$. Definition 5. Let $\mathbb B$ be a boolean algebra and $P = (P, \Sigma)$ be a standard Borel space. A $P$-valued random variable $X$ on $\mathbb B$ is a morphism of boolean algebras $X: \Sigma \to \mathbb B$. Definition 6. Let $\mathbb B$ be a boolean algebra and $P = (P, \Sigma)$ be a standard Borel space. Let $X$ be a $P$-valued random variable on $\mathbb B$. If $\mu$ is a probability measure on $\mathbb B$ and $E \in \Sigma$, one writes $\mu(X \in E)$ to mean $\mu(X(E))$. The distribution of $X$ is the measure $\mu_X(E) = \mu(X \in E)$ on $P$. If $P = \mathbb R$, the cdf of $X$ is the function $x \mapsto \mu(X \leq x)$. Definition 7. Let $\mathbb B,\mathbb B'$ be boolean algebras. Let $X,X'$ be random variables on $\mathbb B,\mathbb B'$ respectively. A morphism of random variables $F: X \to X'$ is a morphism of boolean algebras $F: \mathbb B \to \mathbb B'$ such that $F \circ X = X'$. Given these definitions, I think that it is now straightforward to formulate the central limit theorem, Borel-Cantelli lemma, et cetra. One can drop the word “almost” from many such statements by passing to the quotient $\mathbb B/N$ where $N$ is the ideal of events that almost never happen. We now prove Theorem 1 in two lemmata. Lemma 8. Let $\mathbb B$ be an infinite boolean algebra. Then there exists a infinite set $A \subseteq \mathbb B$ of pairwise mutually exclusive events. To prove Lemma 8, let $\mathbb B_0 = \mathbb B$ and reason by induction. Suppose that we have an infinite boolean algebra $\mathbb B_n$ which is a quotient of $\mathbb B_{n-1}$ and events $x_i \in \mathbb B_i,~i \leq n -1$. 
Since $\mathbb B_n$ is infinite, there exists $x_n \in \mathbb B_n$ such that $\{y \in \mathbb B_n: y \leq x_n\}$ is infinite and $!x_n \neq 0$. We now condition on $x_n$, thus set $\mathbb B_{n+1} = \mathbb B_n/(!x_n)$. Here $(!x_n)$ is the ideal generated by $!x_n$. There is a natural bijection $\{y \in \mathbb B_n: y \leq x_n\} \to \mathbb B_{n+1}$, since any element $y \in \mathbb B_n$ can be decomposed as $(y \wedge x_n) \vee (y \wedge !x_n)$, giving a decomposition $\mathbb B_n = \mathbb B_{n+1} \oplus (!x_n)$. Therefore we can view $x_n$ as not just an element of $\mathbb B_n$ but an element of $\mathbb B_0$ and as an element of $\mathbb B_{n+1}$ (where it is identified with $1$). The bijection $\{y \in \mathbb B_n: y \leq x_n\} \to \mathbb B_{n+1}$ implies that $\mathbb B_{n+1}$ is infinite, so we can keep the induction going. Viewing $x_n$ as an element of $\mathbb B_0$, one easily checks that the $x_i$ form a sequence of pairwise mutually exclusive events. This completes the proof of Lemma 8. Here, a lot of students got stuck (in the usual “sample space” language). The reason is that they tried to construct the set $A$ using properties of $\Omega$. For example, a common thing to do was to try to let $A$ be in bijection with $\Omega$ by letting $x_\omega = \bigcap \{y \in \mathbb B: \omega \in y\}$. In general $x_\omega$ might be a nonmeasurable set; this can be fixed by assuming towards contradiction that $\mathbb B$ is countable. More importantly, the map $\omega \mapsto x_\omega$ is not injective in general, essentially for the same reason that $\Omega$ is not uniquely determined by $\mathbb B$. By replacing $\Omega$ with a suitable quotient, and showing that this quotient is also infinite, one can force $\omega \mapsto x_\omega$ to be injective, but this can be rather messy. In addition, assuming that $\Omega$ is countable will weaken this lemma, so that it does not imply Theorem 1 unless one invokes the continuum hypothesis. Moral: do not ever refer to points in the sample space! Lemma 9. Let $\mathbb B$ be a boolean algebra and suppose that there is a countably infinite set $A \subseteq \mathbb B$ of pairwise mutually exclusive events. Then the boolean algebra $\sigma(A)$ generated by $A$ has cardinality at least $2^{\aleph_0}$. To prove Lemma 9, let $\{x_i\}_{i \in \mathbb N}$ be an enumeration of $A$. Let $I$ be a set of natural numbers and define $f(I) = \bigvee_{i \in I} x_i \vee \bigvee_{i \notin I} !x_i$. Then $f$ is an injective map $I \to A$. Indeed, if $f(I) = f(J)$, then suppose without loss of generality that $i \in I \setminus J$. Then $f(I) \geq x_i$ and $f(I) \geq !x_i$, so $f(I) \geq x_i \vee !x_i = 1$. Since the $x_i$ are pairwise mutually exclusive, the only way that $f(I) = 1$ is if $I = J = \mathbb N$. This proves Lemma 9, and hence Theorem 1. # A “friendly” proof of the Heine-Borel theorem I learned a cute proof that [0, 1] is compact today, and felt the need to share it. By some basic point-set topology, this implies that [0, 1]^n is compact for any n, and then this implies that any closed and bounded subset of $\mathbb R^n$ is compact; the converse is easy, so this proves the Heine-Borel theorem. To prove it, we will use the infinite Ramsey theorem, which can be stated as “For every party with $\omega$ guests, either $\omega$ are all mutual friends, or $\omega$ are all mutual strangers.” There are various elementary or not-so-elementary proofs of the infinite Ramsey theorem; see Wikipedia’s article on Ramsey’s theorem for an example. 
It is a vast generalization of the result from elementary combinatorics that says that “For every party with 6 guests, either 3 are friends or 3 are strangers.” Here $\omega$ is the countable infinite cardinal. We now claim that every sequence $(x_n)_n$ of real numbers has a monotone subsequence. This completes the proof, since any bounded monotone sequence is Cauchy. Say that $n < m$ are friends if $x_n < x_m$; otherwise, say they are strangers. By the infinite Ramsey theorem, there is an infinite set $A \subseteq \mathbb N$ such that either all elements of A are friends, or all elements of A are strangers. Passing to the subsequence indexed by A, we find a subsequence consisting only of friends (in which case it is increasing) or of strangers (in which case it either has a subsequence which is constant, or a subsequence which is decreasing). By the way, this “friendly” property of $\omega$ is quite special. If $\kappa$ is an uncountable cardinal such that for every party with $\kappa$ guests, either $\kappa$ are mutual friends or $\kappa$ are mutual strangers, then we say that $\kappa$ is “weakly compact”, and can easily prove that $\kappa$ is a large cardinal in the sense that it is inaccessible and has a higher strength than an inaccessible cardinal (or even of a proper class of them). # Noncommutative spaces The central dogma of geometry is that a space X is the “same” data as the functions on X, i.e. the functions $X \to R$ for R some ring. In analysis, of course, we usually take R to be the real numbers, or an algebra over the real numbers. Some examples of this phenomenon: • A smooth manifold X is determined up to smooth diffeomorphism by the sheaf of rings of smooth functions from open subsets of X into $\mathbb R$. • An open subset X of the Riemann sphere is determined up to conformal transformation by the algebra of holomorphic functions $X \to \mathbb C$. • An algebraic variety X over a field K is determined up to isomorphism by the sheaf of rings of polynomial functions from Zariski open subsets of X into K. • The example we will focus on in this post: a compact Hausdorff space X is determined by the C*-algebra of continuous functions $X \to \mathbb C$. In the above we have always assumed that the ring R was commutative, and in fact a field. As a consequence the algebra A of functions $X \to R$ is also commutative. If we view A as “the same data” as X, then there should be some generalization of the notion of a space that corresponds to when A is noncommutative. I learned how to do this for compact Hausdorff spaces when I took a course on C*-algebras last semester, and I’d like to describe this generalization of the notion of compact Hausdorff space here. Fix a Hilbert space H, and let B(H) be the algebra of bounded linear operators on H. By a C*-algebra acting on H, we mean a complex subalgebra of B(H) which is closed under taking adjoints and closed with respect to the operator norm. In case $H = L^2(X)$ for some measure space X, we will refer to C*-algebras acting on H as C*-algebras on X. In case X is a compact Hausdorff space, there are three natural C*-algebras on X. First is B(H) itself; second is the algebra $B_0(H)$ of compact operators acting on H (in other words, the norm-closure of finite rank operators), and third is the C*-algebra C(X) of continuous functions on X, which acts on H by pointwise multiplication. The algebra C(X) is of course commutative.
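A finite-dimensional toy of this contrast (only an illustration; the phenomena the rest of the post cares about genuinely need infinite dimensions): take X to be a five-point set with counting measure, so that $L^2(X) = \mathbb C^5$. Functions on X act as diagonal matrices and commute with each other, while generic elements of B(H) do not.

```python
# Toy version of the algebras on X: X = {0,...,4}, L^2(X) = C^5.
# Multiplication operators (diagonal matrices) commute; generic bounded operators do not.
import numpy as np

rng = np.random.default_rng(3)
f, g = rng.random(5), rng.random(5)              # two "functions" on X
Mf, Mg = np.diag(f), np.diag(g)                  # the corresponding multiplication operators
A, B = rng.random((5, 5)), rng.random((5, 5))    # two generic elements of B(H)

print("multiplication operators commute:  ", np.allclose(Mf @ Mg, Mg @ Mf))
print("generic bounded operators commute: ", np.allclose(A @ B, B @ A))
```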
If A is a commutative, unital C*-algebra and I is a maximal ideal in A, then A/I is a field, and by the Gelfand-Mazur theorem in fact $A/I = \mathbb C$. It follows that the maximal spectrum (i.e. the set of maximal ideals) of A is in natural bijection with the space of continuous, surjective algebra morphisms $A \to \mathbb C$; namely, I corresponds to the projection $A \to A/I$. In particular I is closed. One can show that every such morphism has norm 1, so lies in the unit ball of the dual space A*; and that the limit of a net of morphisms in the weakstar topology is also a morphism. Therefore the weakstar topology of A* restricts to the maximal spectrum of A, which is then a compact Hausdorff space by the Banach-Alaoglu theorem. If A = C(X), then the maximal spectrum of A consists of the ideals $I_x$ of functions f such that $f(x) = 0$, where $x \in X$. The projections $A \to A/I_x$ are exactly of the form $f \mapsto f(x)$. This is the content of the baby Gelfand-Naimark theorem: the maximal spectrum of C(X) is X, and conversely every commutative, unital C*-algebra arises this way. (The great Gelfand-Naimark theorem, on the other hand, guarantees that every C*-algebra, defined as a special kind of ring rather than analytically, is a C*-algebra acting [faithfully] on a Hilbert space as I defined above.) The baby Gelfand-Naimark theorem is far from constructive; the proof of the Banach-Alaoglu theorem requires the axiom of choice, especially when A is not separable. Let us briefly run through the properties of X that we can easily recover from C(X). Continuous maps $X \to Y$ correspond to continuous, unital algebra morphisms $C(Y) \to C(X)$; a map f is sent to the morphism which pulls back functions along f. Points, as noted above, correspond to maximal ideals. If K is a closed subset of X and I is the ideal of functions that vanish on K, then A/I is the “localization” of A at I. (So inclusions of compact Hausdorff spaces correspond to projections of C*-algebras.) If X is just locally compact, then the C*-algebra $C_0(X)$ of continuous functions on X which vanish at the fringe of every compactification of X is not unital (since 1 does not vanish at the fringe), and the unital algebras A which extend $C_0(X)$ correspond to compact Hausdorff spaces that contain X, in a sort of “Galois correspondence”. In fact, the one-point compactification of X corresponds to the minimal unital C*-algebra containing $C_0(X)$, while the Stone-Cech compactification corresponds to the C*-algebra $C_b(X)$ of bounded continuous functions on X. Therefore the compactifications of X correspond to the unital C*-algebras A such that $C_0(X) \subset A \subseteq C_b(X)$, and surjective continuous functions which preserve the compactification structure, say $Z \to Y$, correspond to the inclusion maps $C(Y) \to C(Z)$. Two examples of this phenomenon: • The C*-algebra $C_b(\mathbb N) = \ell^\infty$ has a maximal spectrum whose points are exactly the ultrafilters on $\mathbb N$. In particular $\ell^\infty/c_0$ has a maximal spectrum whose points are exactly the free ultrafilters. This is why we needed the axiom of choice so badly. • The compactification of $\mathbb R$ obtained as the C*-algebra generated by $C_0(\mathbb R)$ and $\theta \mapsto e^{i\theta}$ is a funny-looking curve in $\mathbb R^3$. Try drawing it! (As a hint: think of certain spaces which are connected but not path-connected…) If $X = (X, d)$ is a metric space, then we can recover the metric d from its Lipschitz seminorm L. 
This is the seminorm $Lf = \sup_{x_1, x_2} |f(x_1)-f(x_2)|/d(x_1,x_2)$, which is finite exactly if f is Lipschitz. One then has $d(x_1, x_2) = \sup_{Lf \leq 1} |f(x_1) - f(x_2)|$. The Lipschitz seminorm also satisfies the identities $L1 = 0$, and $L(fg) \leq ||f||_\infty Lg + ||g||_\infty Lf$, the latter being known as the Leibniz axiom. A seminorm $\rho$ satisfying these properties gives rise to a metric on X, and if the resulting topology on X is actually the topology of X, we say that $\rho$ is a Lip-norm. Finally, if $E \to X$ is a continuous vector bundle, then let $\Gamma(E)$ denote the vector space of continuous sections of E. Then $\Gamma(E)$ is a projective module over C(X), and Swan’s theorem says that every projective module arises this way. Moreover, the module structure of $\Gamma(E)$ determines E up to isomorphism. So now we have all we need to consider noncommutative compact Hausdorff spaces. By such a thing, I just mean a unital C*-algebra A, which I will call X when I am thinking of it as a space. Since A is noncommutative, we cannot appeal to the Gelfand-Mazur theorem, and in fact the notion of a maximal ideal doesn’t quite make sense. The correct generalization of maximal ideal is known as a “primitive ideal”: an ideal I is primitive if I is the annihilator of a simple A-module. (The only simple A-module is $\mathbb C$ if A is commutative, so in that case I is maximal.) The primitive spectrum of A admits a Zariski topology, which coincides with the weakstar topology when A is commutative. So the points of X will consist of primitive ideals of A. Metrics on X will be Lip-norms; vector bundles will be projective modules. What of functions? We already know that elements of A should be functions on X. But what is their codomain? Clearly not a field — they aren’t commutative! If I is a primitive ideal corresponding to a point x, we let $A/I$ be the localization at x. This will be some C*-algebra, which is a simple module $A_x$ that we view as the ring that functions send x into. (So this construction may send different points into different rings — this isn’t so different from algebraic geometry, however, where localization at an integer n sends points of $\text{Spec} \mathbb Z$ into the ring $\mathbb Z/n$.) Matrix rings are simple, so often $A_x$ will be a matrix ring. Let’s run through an example of a noncommutative space. Let T be the unit circle in $\mathbb R^2$. The group $\mathbb Z/2$ acts on T by $(x, y) \mapsto (x, -y)$. This corresponds to an action on the C*-algebra C(T) by $f(x, y) \mapsto f(x, -y)$. Whenever a group G acts on a C*-algebra B, we can define a semidirect product $B \rtimes G$, and so we let our C*-algebra A be the semidirect product $C(T) \rtimes \mathbb Z/2$. It turns out that A is the C*-algebra generated by unitary operators that form a group isomorphic to $\mathbb Z/2 * \mathbb Z/2$, where $*$ is the coproduct of groups. We may express elements of A as f + gb, where b is the nontrivial element of $\mathbb Z/2$ and $f,g \in C(T)$; A is a C*-algebra acting on L^2(T) by $(f + gb)\xi(x, y) = f(x, y)\xi(x, y) + g(x, y)\xi(x, -y)$, for any $\xi \in L^2(T)$. The center Z(A) of A, therefore, consists of $f \in C(T)$ such that $f(x, y) = f(x, -y)$. Therefore Z(A) is isomorphic to the C*-algebra $C([-1, 1])$, where $f \in Z(A)$ is sent to $\tilde f(x) = f(x, \pm y)$ (where $x^2 + y^2 = 1$, and the choice of sign does not matter by assumption on f). The spectrum of Z(A) is easy to compute, therefore: it is just [-1, 1]! 
For every $x \in [-1, 1]$, let $A_x = A/I_x$ be “localization at x”, where $I_x$ is the ideal generated by $f \in Z(A)$ which vanish at x. Now let $\cos \theta = x$; one then has a morphism of C*-algebras $A_x \to \mathbb C^{2 \times 2}$ by $f + gb \mapsto \frac{1}{2}\begin{bmatrix}f(e^{i\theta})&g(e^{i\theta})\\g(e^{-i\theta})&f(e^{-i\theta})\end{bmatrix}$. This morphism is always injective, and if $x \in (-1, 1)$, it is also surjective. Therefore $A_x = \mathbb C^{2 \times 2}$ in that case. Since $\mathbb C^{2 \times 2}$ is a simple ring, it follows that the spectrum of A contains x. But something funny happens at the end-points, corresponding to the fact that $(\pm 1, 0)$ were fixed points of $\mathbb Z/2$. Since $f(e^{i\theta}) = f(e^{-i\theta})$ in that case and similarly for g, $A_x$ is isomorphic to the 2-dimensional subalgebra of $\mathbb C^{2 \times 2}$ consisting of symmetric matrices with a doubled diagonal entry. This is not a simple module, and in fact projects onto $\mathbb C$ in two different ways; therefore there are two points in the spectrum of A corresponding to each of $\pm 1$! Thus the primitive ideal space X of A is not Hausdorff; in fact it looks like [-1, 1], except that the end-points are doubled. This is similar to the “bug-eyed line” phenomenon in algebraic geometry. What is the use of such a space? Well, Qiaochu Yuan describes a proof (that I believe is due to Marc Rieffel), using this space X, that if B is any C*-algebra and $p, q \in B$ are projections such that $||p - q|| < 1$, then there is a unitary operator u such that $pu = uq$ at MathOverflow. The idea is that p, q actually project onto generators of the C*-algebra $A = C(T) \rtimes \mathbb Z/2$, using the fact that A is also generated by $\mathbb Z/2 * \mathbb Z/2$. As a consequence, in any separable C*-algebra, there are only countably many projections. Thus one may view the above discussion as a very overcomplicated, long-winded proof of the fact that $B(\ell^2)$ is not separable. # An ergodic take on the random graph I wanted to learn about the random graph, which is normally an object studied in model theory, but I know very little model theory. Since I’m taking a course on ergodic theory this term, I figured now would be the time to figure out the basic properties of the random graph from an analyst’s perspective. (I’m sure none of the ideas here are original, though I haven’t seem them written down in this presentation before.) We will be considering undirected graphs without loops, at most one edge per pair of vertices, and one vertex for each natural number. We want to think of such graphs as being encoded by real numbers, but since multiple decimal expansions encode the same real number, this will prove problematic. So instead, we will encode them using infinite binary strings. Let C be the usual Cantor set of infinite binary strings. If 2 denotes the 2-element set then C is a product of countably many copies of 2. We define a probability measure mu on C. The topology of C is generated by cylinders C_y, where y ranges over finite binary strings. If y is length l, then $\mu(C_y) = 2^{-\ell}$. (Actually, we could replace mu by any Bernoulli measure, but this is slightly cleaner.) This measure mu extends to the Borel sets of C. Fix your favorite bijection F from N to the set X of pairs (n, m) of natural numbers such that $n < m$. F lifts to a bijection between the power sets of N and X. 
Now the power set of N may be identified with C, while the power set of X, say G, is nothing more than the set of possible adjacency relations that could give N the structure of a graph. Pushing $\mu$ forward along F, we obtain a probability space G = (G, $\mu$) whose outcomes are graphs on N! Moreover, to specify such a graph, we just need to choose a point in the Cantor set.

But wait, this probability space is highly uniform. That is, if we pick two elements of G at random, then almost surely they will be isomorphic. To see this, we show that almost surely, every finite graph embeds into a randomly chosen element of G.

To see that a graph G on N with this property is unique up to isomorphism, let G’ be another such graph. Since both graphs are defined on N, there is a natural ordering on their vertices. We construct an isomorphism between G and G’ in countably many stages. Assume inductively that at stage 2n – 1, we have constructed isomorphic finite subgraphs H_{2n – 1} and H_{2n – 1}’ of G and G’ respectively, with 2n – 1 vertices. At stage 2n, let g be the least element of G that does not appear in H_{2n – 1} and let H_{2n} be the union of g with H_{2n – 1}. Then H_{2n} is a finite graph, so is isomorphic to a subgraph H_{2n}’ of G’, which can be chosen to contain H_{2n – 1}’. (Strictly speaking this step needs a little more than embeddability: the copy of H_{2n} must be chosen so that it extends the isomorphism already defined on H_{2n – 1}. This follows from the extension property, which a random element of G also satisfies almost surely: for any disjoint finite sets of vertices U and V, there is a vertex adjacent to every vertex of U and to no vertex of V.) Then at stage 2n + 1, let g’ be the least element of G’ which does not appear in H_{2n}’, adjoin it to get a graph H_{2n + 1}’, and find an isomorphism to a subgraph H_{2n + 1} of G. The H_n and H_n’ eventually exhaust G and G’, respectively, so give rise to an isomorphism between G and G’.

Now, to see that every finite graph embeds almost surely. It suffices to show that every finite binary string appears in a randomly chosen infinite binary string almost surely. To do this, say that an infinite binary string x is normal if every finite word of length $\ell$ appears in x with frequency $2^{-\ell}$; in particular, every finite word appears in a normal string at least once. So it suffices to show that almost every infinite binary string is normal. This follows from the ergodic theorem. Let T be the shift map on C (so $T(x_1x_2x_3\dots) = x_2x_3x_4\dots$). Clearly T is measure-preserving, and being a Bernoulli shift it is also ergodic. Now fix a finite word y of length $\ell$ and let $f: C \to \{0, 1\}$ be the indicator function of the cylinder $C_y$, so that $f(T^n(x_1x_2x_3\dots)) = 1$ exactly when y occurs in x starting at position n + 1, and the expected value of f is $\mu(C_y) = 2^{-\ell}$. So by the ergodic theorem, for almost every x the frequency with which y occurs in x is $2^{-\ell}$; intersecting over the countably many finite words y, almost every x is normal, so we are done. (Applying the same argument to the projection onto the first coordinate recovers the familiar fact that 0 appears with frequency 1/2 in almost every string.)
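As a quick numerical illustration of this last step (not part of the argument; the string length, the random seed, and the sample words are arbitrary choices of mine), a long random binary string should contain every short word, with frequency close to $2^{-\ell}$:

```python
import random

rng = random.Random(0)
N = 200_000
x = ''.join(rng.choice('01') for _ in range(N))   # a long random binary string (a truncated "infinite" one)

for word in ('0', '01', '1101', '000000'):
    # count overlapping occurrences of the word and compare with 2^(-length)
    count = sum(x.startswith(word, i) for i in range(N - len(word) + 1))
    print(word, round(count / (N - len(word) + 1), 4), 2.0 ** -len(word))
```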
2021-01-21 05:50:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1824, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9536546468734741, "perplexity": 154.723630887784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703522242.73/warc/CC-MAIN-20210121035242-20210121065242-00023.warc.gz"}
https://chemistry.stackexchange.com/questions/293/is-there-a-theory-behind-selecting-elements-that-may-be-successful-in-potential/352
# Is there a theory behind selecting elements that may be successful in potential superconductors? Looking at something like $\ce{YBa2Cu3O7}$ which was one of the first cuprate superconductors to be discovered, I'm always curious how the selection of these substances as likely superconductors comes about? Does it have to do with optimal crystalline structure, unpaired electrons, or is it similar to the way semiconductors are designed with doping to create "holes"? Or are these superconductors simply determined experimentally based on small changes of what has worked in the past? As a side question: I've always been curious as to how they stumbled upon Yttrium for this particular case. It's not something I think of as being carried in ready supply. Would this have been after YAG lasers were already around?
2019-09-22 08:49:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3703233301639557, "perplexity": 714.2086834578932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575402.81/warc/CC-MAIN-20190922073800-20190922095800-00274.warc.gz"}
http://wims.univ-savoie.fr/wims/wims.cgi?lang=en&+module=U1%2Fanalysis%2Ftafcalc.en
# MVT calc

--- Introduction ---

Recall the Mean Value Theorem: if $f:[a,b]\to\mathbb{R}$ is a function continuous on the interval $[a,b]$ and differentiable on the open interval $(a,b)$, then there exists a point $c\in(a,b)$ such that

$f'(c)=\frac{f(b)-f(a)}{b-a}.$

In the exercise MVT calc, the server presents you with such a function $f$ as well as an interval $[a,b]$, and your goal is to find explicitly a point $c$ which satisfies the above equation. Note that this point $c$ is not necessarily unique.

Other exercises on: Mean Value Theorem, Functions, Derivative, Continuity, Differentiability.
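For concreteness, here is a small symbolic sketch of the task the exercise asks for (the function $f(x) = x^3 - 2x$ and the interval $[0, 2]$ are simply an example of mine, not one generated by the server):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = x**3 - 2*x                    # example function: continuous and differentiable everywhere
a, b = 0, 2

slope = (f.subs(x, b) - f.subs(x, a)) / (b - a)         # (f(b) - f(a)) / (b - a)
c = sp.solveset(sp.Eq(sp.diff(f, x), slope), x, domain=sp.Interval.open(a, b))
print(c)                          # {2*sqrt(3)/3}; any element is a valid c (it need not be unique)
```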
2020-02-19 11:44:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4334585964679718, "perplexity": 474.28106269269773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144111.17/warc/CC-MAIN-20200219092153-20200219122153-00384.warc.gz"}
https://math.stackexchange.com/questions/3185844/determinant-of-a-matrix-with-positive-diagonal-entries-is-greater-than-1/3187210
# Determinant of a matrix with positive diagonal entries is greater than 1

Let $$A$$ be an $$n\times n$$ matrix whose diagonal entries are positive and whose remaining entries are negative, such that the sum of the entries in every column is 1. Prove that $$\det(A) > 1.$$ I have no idea how to begin. Any suggestion or hint?

By your assumptions, $$A^T$$ is a matrix with positive entries on the diagonal and negative off-diagonal entries such that each row sums to 1. Let $$B$$ denote any matrix satisfying these conditions; we'll prove that $$\det(B)>1$$. First, notice that $$B$$ has an eigenvector $$(1,\dots,1)$$ with eigenvalue 1. Now suppose $$\textbf{x}=(x_1,\dots,x_n)$$ is an eigenvector with complex entries, not all the same, and eigenvalue $$\lambda$$; we'll show that $$|\lambda|>1$$. Suppose $$x_i$$ has maximal modulus among the entries of $$\textbf{x}$$ and suppose WLOG that $$x_i>0$$ (otherwise just multiply by the appropriate phase). Now we know that $$|\lambda x_i|=|\sum_{j\neq i}b_{i,j}x_j+b_{i,i}x_i| \geq |b_{i,i}x_i|-\sum_{j\neq i}|b_{i,j}x_j| = b_{i,i}x_i+\sum_{j\neq i}b_{i,j}|x_j| \geq \sum_{j=1}^n b_{i,j}x_i=x_i$$ But one of the above inequalities must be strict (the first one if the $$x_j$$ all have the same modulus and the second if not). This implies that $$|\lambda|>1$$, as desired.

To conclude: every eigenvalue of $$B$$ is therefore either equal to 1 or of modulus greater than 1. Moreover, since each row sums to 1 and the off-diagonal entries are negative, we have $$b_{i,i}>1$$, and the Gershgorin disc centred at $$b_{i,i}$$ has radius $$\sum_{j\neq i}|b_{i,j}| = b_{i,i}-1$$, so every eigenvalue has real part at least 1; in particular every real eigenvalue is at least 1. Now $$\det(B)$$ is the product of the eigenvalues: real eigenvalues contribute factors $$\geq 1$$, and complex eigenvalues pair with their conjugates to contribute factors $$|\lambda|^2 \geq 1$$. Since $$\operatorname{tr}(B)=\sum_i b_{i,i}>n$$, not every eigenvalue equals 1, so at least one of these factors is strictly greater than 1 and $$\det(A)=\det(B)>1$$.
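Not a substitute for the proof, but here is a quick numerical sanity check (the sizes, the random model, and the number of trials are arbitrary choices of mine) that samples matrices of the required form and verifies $$\det(A)>1$$:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    n = int(rng.integers(2, 7))
    A = -rng.random((n, n))                    # negative off-diagonal entries (strictly negative a.s.)
    np.fill_diagonal(A, 0.0)
    np.fill_diagonal(A, 1.0 - A.sum(axis=0))   # choose the diagonal so each column sums to exactly 1
    assert np.all(np.diag(A) > 0)              # the diagonal entries come out positive (in fact > 1)
    assert np.linalg.det(A) > 1.0              # the claimed inequality
print("det(A) > 1 held in all 1000 samples")
```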
2019-05-19 22:27:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 17, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9864760041236877, "perplexity": 67.57290755423641}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255182.37/warc/CC-MAIN-20190519221616-20190520003616-00373.warc.gz"}
https://amininima.wordpress.com/2013/04/30/yapp-part-ii/
## YAPP – Part II

In the previous post “YAPP (Yet Another Prisoners Problem)” we considered the problem of 100 prisoners each opening 50 out of 100 boxes, surviving iff all 100 prisoners found their unique id in one of the 50 boxes. Remarkably there was a strategy enabling the prisoners to survive with probability > 30%. Even more remarkably we established that the survival probability continues to stay over 30% as the number of prisoners and boxes tend to infinity.

In this post I wanted to consider a variation of the problem, similar in spirit, yet seemingly more difficult than the previous one. Let’s state the problem I have in mind (in its full generality):

Problem

n condemned prisoners awaiting execution are offered a chance to live (for unknown reasons). The prison administration has labelled each prisoner with a unique number id from 1 to n. Each of the prisoners is led alone into a room with 2n closed boxes, each containing a number from 1 to n in some random arrangement. The multiplicity of each id number is exactly 2 (that is, there are exactly two boxes containing the same id). Each prisoner is allowed to open n of the 2n boxes. The deal is: if all n prisoners manage to find their number id in some box after opening n boxes, then they may all live. Otherwise they will all die. The prisoners may agree on a strategy with each other before they enter the room, but not after. They are not allowed to manipulate the boxes nor leave messages to following prisoners. All boxes are closed when a prisoner enters the room. The question is: what strategy should the prisoners choose to maximize their chance of surviving?

Let’s first make a few easy remarks:

• The probability of all prisoners surviving under independent random choices is now roughly $(\frac{3}{4})^n$, as each prisoner's individual chance of success is about $\frac{3}{4}$. To see this, each person chooses $n$ out of $2n$ boxes, so the probability that the first of the two instances of id $k$ is not among the $n$ boxes is $\frac{1}{2}$, and the same goes for the second instance of id $k$. Treating the two events as independent, the probability that both are missed is about $\frac{1}{4}$, i.e. the probability that at least one is chosen is about $\frac{3}{4}$. (The two events are not exactly independent: the exact probability of missing both copies is $\frac{n-1}{2(2n-1)}$, which tends to $\frac{1}{4}$ as $n \to \infty$.) Different prisoners choose independently, so the total survival probability among the $n$ prisoners is approximately $(\frac{3}{4})^n.$

• When we compare the problems, we compare the settings where we have n prisoners over n boxes in the original problem vs n prisoners over 2n boxes in this problem.

• The previous strategy no longer works as we are no longer dealing with classical permutations. Because of the double multiplicity of each number, paths may self-intersect prematurely.

A few questions:

1. Is this problem harder or easier than the previous problem? After all, we have twice as many boxes, but on the other hand each prisoner has the opportunity to find their number in two boxes instead of one and is allowed to open twice as many boxes.
2. Can we salvage anything from the previous strategy, i.e. “can we fix the argument?”
3. Is there a correspondence between the two problems? (Can we translate a configuration (or a group of configurations) in one problem to a configuration in the second problem?)
4. What is the total number of configurations (permutations) in this problem?
5. If we can find an alternative argument, how would we go about proving optimality? (Can we adapt/reuse the proof by Curtin and Warshauer (see previous post)?)
I should probably mention at this point that I have unfortunately not managed to find an argument that works as well as the argument for the previous problem (please feel free to comment if you have any nice ideas). Nevertheless I have a partial remediation of the previous strategy that is somewhat better than the default independent strategy. I will begin discussing this strategy in what remains of this post.

Let’s first look at the easiest question – Question 4. The number of ways of permuting $n$ distinct numbers is, as is familiar, $n!$ In this problem we are dealing with a so-called multiset of n distinct numbers, each of multiplicity 2. In general a multiset $\{a_1,...,a_k\}$ of $k$ distinct elements with multiplicities $m_1,...,m_k$ (so $n = m_1 + \dots + m_k$ objects in total) can be permuted in

$\left( \begin{array}{c} n \\ m_1,...,m_k \end{array} \right) = \frac{n!}{m_1! \cdots m_k!}$

ways. To see this just note that there are $n!$ ways of permuting the $n$ objects as if they were distinct. In each such rearrangement we can permute the objects with multiplicity $m_i$ in $m_i!$ ways among each other, each leading to the same rearrangement. Therefore there are only $\frac{1}{m_i!}$ as many true rearrangements. The same argument inductively for each $i=1,...,k$ establishes the formula. In this problem we therefore have $\left( \begin{array}{c} 2n \\ 2,...,2 \end{array} \right) = \frac{(2n)!}{2^n}$ different permutations. As we might have expected this space is vastly larger than the space of $n!$ permutations in the previous problem.

Let’s get to the proposed strategy. As we saw, the obvious issue with pointer following in the new problem setting is that paths no longer partition into disjoint cycles. The paths have a tendency to curl back into themselves, leading to degenerate cycles. To straighten out the curls we can impose a new rule: if we encounter number $k$ for the second time, then visit box number $n+k$.

What are the implications of this rule? We certainly won’t visit any box more than once, since each number has multiplicity 2. As the rule implies, box $k$ is visited the first time number $k$ is encountered, then box $n+k$ the second time, and thereafter we will never see number $k$ again, so both boxes are henceforth off limits. Because of this fact the path will eventually lead to the discovery of every number id (of course we will stop once we have discovered the number id we are looking for). This is somewhat good news: we have a procedure for exploiting structure in the permutation. We are interested in the proportion of permutations such that every path taken by the prisoners has length at most $n$ under the rules of the strategy.

The next natural question is whether we can quantify how well this strategy ought to work. In the previous problem we counted the number of permutations $N_{k,n}$ containing a $k$-cycle ($k > \frac{n}{2}$) by indirectly expanding the following recursion:

$N_{i,j} = \begin{cases} jN_{i-1,j-1}, & if\;\; i > 1 \\ (j-1)! & if \;\;i=1 \end{cases}$

In this problem we have more choice. Let $M_{i,j,k}$ denote the number of permutations including a cycle of length $i$, where $j$ denotes #(remaining numbers with multiplicity 1) and $k$ denotes #(remaining numbers with multiplicity 2). At each stage, we can either choose a number with multiplicity 2 (i.e. a new number) or a number of multiplicity 1 (i.e. a number already discovered). Finally, once we have our cycle of length $i$, we can permute the remaining multiset at will.
This leads to the following recursion:

$M_{i,j,k} = \begin{cases} jM_{i-1,j-1,k} + kM_{i-1,j+1,k-1}, & if\;\; i > 1 \\ \left( \begin{array}{c} j+(k-1) \\ 1,...,1,2,...,2 \end{array} \right) & if \;\;i=1 \end{cases}$

We are interested in $M_{r,0,n} \;\; (r > n)$. Unfortunately unwinding this recursion into a closed-form expression looks too difficult (don’t hesitate to let me know if you can think of a nice combinatorial argument for counting k-cycles). Things are further complicated by the fact that cycles of length greater than $n$ need not occur uniquely in any given permutation, making the probability of failure difficult to evaluate.

Resorting to Monte-Carlo methods we obtain a simulated survival curve (the figure is omitted here; a small simulation sketch appears below the exercises). Unfortunately it seems the combinatorial explosion of long cycles takes over fairly quickly, rendering the strategy far from ideal.

I’ll admit defeat for now and come back to this problem again some time in the future.

Exercise 1 (How many words? – Easy) How many words (not necessarily in any language dictionary) can you form from the string “PRISONERSPROBLEM”? (Hint: think of multisets.)

Exercise 2 (An Optimal Strategy – (Possibly) Hard) Find an optimal solution to the second Prisoners Problem (or even a good quantified strategy for that matter).
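Here is the promised simulation sketch, a rough Monte Carlo estimate of the survival probability under the modified pointer-following strategy. It fills in a couple of details the post leaves implicit, and these are assumptions of mine: prisoner i starts at the box with label i, jumps to box k on the first encounter of number k and to box n + k on the second; the values of n and the trial counts are arbitrary.

```python
import random

def survive(n, rng):
    """One trial: True if all n prisoners find their own id under the strategy."""
    contents = list(range(n)) * 2          # ids 0..n-1, each appearing in exactly two boxes
    rng.shuffle(contents)                  # a random arrangement over the 2n boxes
    for prisoner in range(n):
        seen = set()
        box = prisoner                     # assumed starting rule: begin at the box with your own label
        found = False
        for _ in range(n):                 # each prisoner may open at most n boxes
            k = contents[box]
            if k == prisoner:
                found = True
                break
            if k in seen:
                box = n + k                # second encounter of k: jump to box n + k
            else:
                seen.add(k)
                box = k                    # first encounter of k: jump to box k
        if not found:
            return False
    return True

rng = random.Random(0)
for n in (5, 10, 20):
    trials = 5000
    wins = sum(survive(n, rng) for _ in range(trials))
    print(n, wins / trials, (3 / 4) ** n)  # estimated survival vs. the rough independent baseline
```

The printed estimates can then be compared with the roughly $(\frac{3}{4})^n$ baseline for independent uniformly random choices discussed in the remarks above.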
2018-01-16 16:53:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 45, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8050734996795654, "perplexity": 530.6682939396561}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886476.31/warc/CC-MAIN-20180116164812-20180116184812-00510.warc.gz"}
http://mathhelpforum.com/advanced-algebra/45071-amalgams.html
## amalgams Can you help me prove this: determine the number of isomorphism types of amalgams (S_n, S_n, S_(n-1)), using Goldschmidt's lemma.
2014-03-17 07:26:28
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9338058233261108, "perplexity": 11674.419930088578}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678704953/warc/CC-MAIN-20140313024504-00049-ip-10-183-142-35.ec2.internal.warc.gz"}
http://openstudy.com/updates/55b57113e4b039df908d8f17
## anonymous one year ago can someone tell me if im right

• This Question is Open

1. anonymous: In the figure, ΔABC ~ ΔDEF. Solve for x. Triangles ABC and DEF. Angles B and E are congruent and measure 160 degrees. AB measures x. BC measures 16. DE measures 3. EF measures 8. x = 1.5, x = 6, x = 4, x = 5.5
2. anonymous: 1.5 is what i got
3. anonymous:
4. DanJS: k..
5. DanJS: ~ means similar right..
6. anonymous: ya
7. DanJS: ok i think you mixed it up a bit...
8. anonymous: how
9. DanJS: since the triangles are similar, the ratios of similar sides are the same...
10. anonymous: so that means it would have to be 5.5
11. DanJS: $\frac{ DE }{ AB } = \frac{ EF }{ BC }$
12. anonymous: so i was right because it would equal and 5.5 shows that
13. DanJS: $\frac{ 3 }{ x } = \frac{ 8 }{ 16 }$
14. anonymous: im lost
15. DanJS: the ratios of DE/AB has to be the same as EF/BC , since they are similar triangles
16. DanJS: just have to fill in the numbers
17. anonymous: well now i got 6 i multiplied then divided by 100
18. DanJS: yep, x=6
19. anonymous: yesss
20. anonymous: thats cross multiplying
21. anonymous: thank you
22. DanJS: welcome
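For what it's worth, a short symbolic check of the proportion DanJS sets up above (this is my addition, not part of the thread):

```python
import sympy as sp

x = sp.symbols('x')
# DE/AB = EF/BC with DE = 3, AB = x, EF = 8, BC = 16
print(sp.solve(sp.Eq(3 / x, sp.Rational(8, 16)), x))   # [6]
```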
2017-01-21 01:18:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6461731791496277, "perplexity": 6208.018956051633}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00232-ip-10-171-10-70.ec2.internal.warc.gz"}