https://answers.ros.org/question/11447/kinect-not-detected/?sort=oldest
# Kinect not detected

Hi, I previously posted an issue regarding errors in OpenNI driver installation (http://answers.ros.org/question/2297/openni-drivers-installation-errors). With the answer to that question I was able to install the drivers successfully. Then I ran the openni_camera node's openni_node.launch file, which says the device is not connected:

```
aravindhan@rrc-laptop:~$ roslaunch openni_camera openni_node.launch
... logging to /home/aravindhan/.ros/log/a30405ca-f056-11e0-83fc-5c260a051546/roslaunch-rrc-laptop-22434.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://rrc-laptop:42441/

SUMMARY
========

PARAMETERS
 * /rosdistro
 * /openni_node1/use_indices
 * /openni_node1/depth_registration
 * /openni_node1/image_time_offset
 * /openni_node1/depth_frame_id
 * /openni_node1/depth_mode
 * /openni_node1/debayering
 * /rosversion
 * /openni_node1/projector_depth_baseline
 * /openni_node1/rgb_frame_id
 * /openni_node1/depth_rgb_translation
 * /openni_node1/depth_time_offset
 * /openni_node1/image_mode
 * /openni_node1/shift_offset
 * /openni_node1/device_id
 * /openni_node1/depth_rgb_rotation

NODES
  /
    openni_node1 (openni_camera/openni_node)
    kinect_base_link (tf/static_transform_publisher)
    kinect_base_link1 (tf/static_transform_publisher)
    kinect_base_link2 (tf/static_transform_publisher)
    kinect_base_link3 (tf/static_transform_publisher)

auto-starting new master
process[master]: started with pid [22450]
ROS_MASTER_URI=http://localhost:11311

setting /run_id to a30405ca-f056-11e0-83fc-5c260a051546
process[rosout-1]: started with pid [22463]
started core service [/rosout]
process[openni_node1-2]: started with pid [22473]
process[kinect_base_link-3]: started with pid [22476]
process[kinect_base_link1-4]: started with pid [22477]
process[kinect_base_link2-5]: started with pid [22478]
process[kinect_base_link3-6]: started with pid [22479]
[ INFO] [1317931571.886741090]: [/openni_node1] No devices connected.... waiting for devices to be connected
[ INFO] [1317931572.887058521]: [/openni_node1] No devices connected.... waiting for devices to be connected
[ INFO] [1317931573.887393253]: [/openni_node1] No devices connected.... waiting for devices to be connected
[ INFO] [1317931574.887772578]: [/openni_node1] No devices connected.... waiting for devices to be connected
[ INFO] [1317931575.888150279]: [/openni_node1] No devices connected.... waiting for devices to be connected
```

What could possibly be wrong here? Thanks in advance, Karthik

## Comments

Please post the output of "lsusb". (2011-10-06 10:47:49 -0600)

## 5 Answers

**Answer 1.** Reinstalled Ubuntu 11.04 (32-bit), installed ROS Electric and everything else. Now I am getting the point cloud from the Kinect in rviz. I hope this is not the only solution to the question, but since I couldn't find much help on this, I had to go this way. So this is one solution to the problem. Thanks, Karthik

**Answer 2.** Does it work better if you start each node separately? First `roscore`, then, in another terminal, `rosrun openni_camera openni_node`.

Comments:
- No, it's not working that way either. (2011-10-06 10:12:34 -0600)
- Does the device show up if you do lsusb in a terminal (add -v or -vv for more verbose output)? (2011-10-06 11:39:08 -0600)

**Answer 3.** Same problem here with Ubuntu 10.04; it seems to be an issue with the libusb-1.0-0-dev or the openni-dev libraries. The solution was to reinstall the dependencies of the openni_kinect stack. First uninstall the OpenNI developer package:

```
$ sudo apt-get remove openni-dev libusb-1.0-0-dev
```

and reinstall it afterwards:

```
$ sudo apt-get install openni-dev libusb-1.0-0-dev
```

Then recompile the package using the --rosdep-install and --pre-clean options:

```
$ rosmake openni_ros --rosdep-install --pre-clean
```

Let me know if it works for you. Regards, Mario

Comment: Thanks Mario, but I am not able to check it now, as things are working fine for me after the new installation. (2011-12-14 06:59:32 -0600)

**Answer 4.** I've had this problem before and solved it by killing the XnSensorServer: `killall XnSensorServer`

**Answer 5.** Adding my own notes from troubleshooting: sometimes it can also be a power problem. Check that the turtlebot adapter (or wall adapter) is plugged in and the green light on the connector is on. Verify the iCreate base is on. Try an lsusb: you should see 3 devices from "Microsoft Corp". If you only see one, the Kinect may not have full power.
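The lsusb check from the last answer can be scripted. A sketch, assuming the usual Kinect USB IDs (045e:02b0 motor, 045e:02ad audio, 045e:02ae camera — verify against your own hardware); a captured sample stands in for the live `lsusb` call here:

```shell
# Count "Microsoft Corp" entries in lsusb output; a fully powered
# Kinect should enumerate three devices (motor, audio, camera).
# lsusb_sample is illustrative sample output -- on the robot,
# replace it with a real `lsusb` call.
lsusb_sample() {
  printf '%s\n' \
    'Bus 001 Device 005: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor' \
    'Bus 001 Device 006: ID 045e:02ad Microsoft Corp. Xbox NUI Audio' \
    'Bus 001 Device 007: ID 045e:02ae Microsoft Corp. Xbox NUI Camera'
}
count=$(lsusb_sample | grep -c 'Microsoft Corp')
echo "$count"
if [ "$count" -lt 3 ]; then
  echo 'Kinect may not have full power'
fi
```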
2022-01-19 18:02:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17114904522895813, "perplexity": 4110.802245890674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301475.82/warc/CC-MAIN-20220119155216-20220119185216-00528.warc.gz"}
http://www.ni.com/documentation/en/labview/2.0/node-ref/inverse-sine/
# Inverse Sine (G Dataflow)

Computes the arcsine of a specified value (x) in radians. If x is not complex and is less than -1 or greater than 1, the result is not a number (NaN).

## x

An input to this operation. This input supports scalar numbers, arrays or clusters of numbers, and arrays of clusters of numbers.

**Data Type Changes on FPGA** — When you add this node to a document targeted to an FPGA, this input has a default data type that uses fewer hardware resources at compile time.

## arcsin

Result of the operation. This output assumes the same numeric representation as x. When x is of the form x = a + b i, that is, when x is complex, the following equation defines arcsin:

$\mathrm{arcsin}(x) = -i\,\ln\left(ix + \sqrt{1 - x^2}\right)$

**Where This Node Can Run:**

- Desktop OS: Windows
- FPGA: This product does not support FPGA devices
- Web Server: Supported in VIs that run in a web application
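The complex-branch identity above can be verified numerically. A sketch in Python (not G dataflow) using the standard-library `cmath` module; since `math.asin` raises an error outside [-1, 1] rather than returning NaN, the real-input behavior described in the text is emulated explicitly:

```python
import cmath
import math

def arcsin_via_log(x: complex) -> complex:
    """arcsin(x) = -i * ln(i*x + sqrt(1 - x^2)), the identity quoted above."""
    i = 1j
    return -i * cmath.log(i * x + cmath.sqrt(1 - x * x))

def arcsin_real(x: float) -> float:
    """Real-input behavior described in the text: NaN outside [-1, 1]."""
    if x < -1 or x > 1:
        return math.nan
    return math.asin(x)

# The log identity agrees with the library arcsine on a complex input...
z = 0.5 + 0.25j
assert cmath.isclose(arcsin_via_log(z), cmath.asin(z))

# ...and reduces to the ordinary arcsine on real inputs inside [-1, 1].
assert math.isclose(arcsin_via_log(0.5).real, math.asin(0.5))
print(arcsin_real(2.0))  # nan
```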
2018-12-13 16:34:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6637279987335205, "perplexity": 2185.27837033958}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824912.16/warc/CC-MAIN-20181213145807-20181213171307-00526.warc.gz"}
https://ftp.aimsciences.org/article/doi/10.3934/krm.2020052
American Institute of Mathematical Sciences

January 2021, 14(1): 149-174. doi: 10.3934/krm.2020052

Collisional sheath solutions of a bi-species Vlasov-Poisson-Boltzmann boundary value problem

Université de Nantes, CNRS UMR 6629, Laboratoire de Mathématiques Jean Leray, 2, rue de la Houssinière, 44332 Nantes

Received April 2020. Revised July 2020. Published November 2020.

The mathematical description of the interaction between a collisional plasma and an absorbing wall is a challenging issue. In this paper, we propose to model this interaction by considering a stationary bi-species Vlasov-Poisson-Boltzmann boundary value problem with boundary conditions that are consistent with the physics. In particular, we show that the wall potential can be uniquely determined from the ambipolarity of the particle flows as the unique solution of a nonlinear equation. We also prove that it is an increasing function of the electron re-emission coefficient at the wall. Based on the Schauder fixed point theorem, our analysis establishes the existence of a solution provided, on the one hand, that the incoming ion density satisfies a moment condition that generalizes the historical Bohm criterion, and, on the other hand, that the collision frequency does not exceed a critical value whose definition is subordinated to the strict validity of our generalized Bohm criterion.

Citation: Mehdi Badsi. Collisional sheath solutions of a bi-species Vlasov-Poisson-Boltzmann boundary value problem. Kinetic & Related Models, 2021, 14 (1): 149-174. doi: 10.3934/krm.2020052

Figure caption: Schematic characteristic ion trajectories associated with a decreasing potential $\phi$. The solid lines correspond to characteristic curves originating from $x = 0$ with positive velocities, and they span $D_{i,0}$, the lighter gray region. The dashed lines correspond to characteristic curves originating from the wall with negative velocities, and they span the darker gray region, $D_{i,1}$.

Figure caption: Schematic characteristic electron trajectories associated with a decreasing potential $\phi$. The solid lines correspond to characteristic curves originating from $x = 0$ with positive velocities, and they span $D_{e,0}$, the lighter gray region. The dashed lines correspond to characteristic curves originating from the wall with negative velocities, and they span the darker gray region, $D_{e,1}$.
2021-01-16 12:59:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5682911276817322, "perplexity": 4589.569834633159}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703506640.22/warc/CC-MAIN-20210116104719-20210116134719-00325.warc.gz"}
https://gmatclub.com/forum/a-train-left-a-station-p-at-6-am-and-reached-another-station-q-at-11-a-277308.html
# A train left a station P at 6 am and reached another station Q at 11 am

Math Expert (Bunuel), 26 Sep 2018:

A train left a station P at 6 am and reached another station Q at 11 am. Another train left station Q at 7 am and reached P at 10 am. At what time did the two trains pass one another?

(A) 7:50 am
(B) 8:13 am
(C) 8:30 am
(D) 8:42 am
(E) 9:03 am

Difficulty: 55% (hard). Question stats: 68% (02:44) correct, 32% (03:06) wrong, based on 94 sessions.

VP, 26 Sep 2018:

For the 1st train, total time = 11 − 6 = 5 hrs. For the 2nd train, total time = 10 − 7 = 3 hrs. Speed and distance are not given, so assign a variable for the distance.
Let the distance be x km, so the average speed of the 1st train is x/5 km/h and that of the 2nd train is x/3 km/h. Suppose the trains meet t hours after 6 am. The second train starts its journey 1 hour later, so it has been moving for (t − 1) hours and covers less distance. The fractions of the total distance covered must sum to 1:

t/5 + (t − 1)/3 = 1
3t + 5t − 5 = 15
8t = 20
t = 20/8 = 5/2 = 2 hrs 30 mins

So they meet at 6:00 + 2:30 = 8:30 am. The best answer is C.

Senior SC Moderator, 26 Sep 2018:

Assign a distance, then use the "gap" and relative-speed approach.

Distance? Call the trains B and C. They travel the same distance. B takes 5 hours (from 6 to 11 am); C takes 3 hours (from 7 to 10 am). Let the distance between P and Q be 15 miles (the LCM of 5 and 3), and use each train's time to find its rate.

(1) Each train's rate? $$r*t=D$$
B's rate: $$(r*5hrs)=15mi$$, so $$r=\frac{15mi}{5hrs}=3$$ mph
C's rate: $$(r*3hrs)=15mi$$, so $$r=\frac{15mi}{3hrs}=5$$ mph

(2) Distance "gap" between B and C? B is at station P and C is at station Q, so the initial distance is 15 miles. But B travels alone for 1 hour, covering $$(3mph*1hr)=3$$ miles and shortening the distance between them from 15 to 12 miles. That 12 miles is a "gap": when B and C, at their relative speed, cover this distance, they close the gap and pass one another.

(3) Relative rate/speed? Opposite directions (towards or away): ADD the rates. Relative speed: $$(3+5)=8$$ mph

(4) Time required for them to pass? $$R*T=D$$, so $$T=\frac{D}{R}=\frac{12mi}{8mph}=\frac{3}{2}=1.5$$ hours

(5) Clock time at which they pass one another?
Calculate clock time from the start time of the second train (then both trains are moving and thus closing the gap together at their relative/combined speed). Train C left at 7:00 am; 1.5 hours later is 8:30 am.

Target Test Prep Representative (Scott Woodbury-Stewart), 2 Oct 2018:

We can let t = the number of hours the second train traveled before passing the first train, and d = the distance between the two stations. The rate of the first train is d/5 and that of the second train is d/3, and we can create the equation:

d/5 × (t + 1) = d/3 × t

Dividing by d: 1/5 × (t + 1) = t/3. Multiplying by 15: 3(t + 1) = 5t, so 3t + 3 = 5t, 3 = 2t, and t = 3/2 = 1.5. So the trains passed each other at 8:30 am.

Alternate solution: let t = the time of the second train and d = the distance between the two stations. The rate of the first train is d/5 and the rate of the second train is d/3. At 7:00 am the first train had been traveling for one hour, so it had covered d/5 × 1 = d/5; the distance between the two trains at 7:00 am is therefore d − d/5 = 4d/5. Since the two trains travel towards each other, the distance between them decreases by d/5 + d/3 = 8d/15 miles each hour. Thus a distance of 4d/5 is covered in (4d/5)/(8d/15) = 3/2 = 1.5 hours.
Thus, the two trains will meet at 7:00 am + 1.5 hours = 8:30 am.

Intern, 18 May 2019 (replying to the Target Test Prep solution above):
Hey ScottTargetTestPrep, I was hoping you could share the logic behind the equation d/5 × (t + 1) = d/3 × t. Since it would mean that the distances travelled by the two trains in (t + 1) and t hours respectively are equal, I'm assuming that reaching this conclusion would require some more calculation, or perhaps a shortcut, if you could share it.

Senior Manager (GMATGuruNY, Dartmouth College), 18 May 2019:

Let the distance between P and Q be 5 miles. Since the first train takes 5 hours (from 6 to 11 am) to travel the 5-mile distance between P and Q, the rate of the first train is $$\frac{d}{t} = \frac{5}{5} = 1$$ mph. Traveling at 1 mph from 6 to 7 am, the first train covers 1 mile of the 5-mile distance between P and Q, leaving 4 miles between the two trains.

Time and rate have a RECIPROCAL RELATIONSHIP. Whereas the first train takes 5 hours to travel between P and Q (from 6 am to 11 am), the second train takes 3 hours (from 7 am to 10 am). Since the TIME RATIO for the two trains is $$\frac{first}{second} = \frac{5}{3}$$, the RATE RATIO for the two trains is $$\frac{first}{second} = \frac{3}{5}$$. The rate ratio implies the following: of every 8 feet traveled when the two trains work together to meet, the first train travels 3 feet while the second train travels 5 feet. Thus, the first train travels $$\frac{3}{8}$$ of the remaining 4 miles: $$\frac{3}{8}*4 = 1.5$$ miles. Since the rate of the first train is 1 mph, the time for it to travel 1.5 miles after 7 am to meet the second train is $$\frac{d}{r} = \frac{1.5}{1} = 1.5$$ hours.
Thus, the time at which the first train meets the second train is 7 am + 1.5 hours = 8:30 am.

Manager, 19 May 2019:

P______R____________M___________________Q

Let the distance between P and Q be d km. The times taken by the 1st train (T1) and the 2nd train (T2) to cover d are 5 and 3 hours respectively. By the time T2 started from station Q, T1 had already been travelling for 1 hour and had thus covered d/5 km, leaving 4d/5 km between them. Let R denote the point T1 had reached when T2 starts from station Q. So at 7 am, T1 and T2 start from R and Q respectively towards each other and meet at point M, covering a combined distance of 4d/5 km. Since the ratio of the speeds of T1 and T2 is 3:5, T2 covers 5/8 of RQ = (4d/5)·(5/8) = d/2 km to reach the meeting point M. Since T2 covers d km in 3 hours, it reaches M 1.5 hours after it starts, i.e. at 8:30 am.
Ans: C

Director, 19 May 2019:

Let the distance = lcm(3, 5) = 15 km. The speed of the first train (from P) is 3 km/h and the speed of the second train (from Q) is 5 km/h. The distance traveled by the first train from 6 to 7 am is 3 km, so the remaining distance between the two trains at 7 am is 15 − 3 = 12 km. The relative speed is 5 + 3 = 8 km/h, so the time taken to meet is 12/8 = 1.5 hrs. They meet at 8:30 am (7:00 + 1 hr 30 min).
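The solutions above all reduce to the same arithmetic, which can be checked in a few lines of Python (a sketch; the 15 km distance is the arbitrary LCM choice from the answers above, and any positive distance gives the same meeting time):

```python
# Two trains: one covers the P-Q distance in 5 h (leaving 6:00),
# the other in 3 h (leaving 7:00, opposite direction).
d = 15.0                 # assumed distance in km (any value works)
v1, v2 = d / 5, d / 3    # speeds: 3 km/h and 5 km/h for d = 15

# Head start: train 1 travels alone from 6:00 to 7:00.
gap = d - v1 * 1.0       # 12 km left between the trains at 7:00

# Close the gap at the combined (relative) speed.
t_after_7 = gap / (v1 + v2)           # 1.5 h
meet_minutes = 7 * 60 + t_after_7 * 60
print(f"{int(meet_minutes // 60)}:{int(meet_minutes % 60):02d} am")  # 8:30 am
```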
2019-10-18 16:25:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6378130912780762, "perplexity": 3397.067893391532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986684226.55/warc/CC-MAIN-20191018154409-20191018181909-00341.warc.gz"}
https://bookdown.org/egarpor/NP-EAFIT/reg-param.html
## 3.1 Review on parametric regression

We review now a couple of useful parametric regression models that will be used in the construction of nonparametric regression models.

### 3.1.1 Linear regression

#### Model formulation and least squares

The multiple linear regression employs multiple predictors $$X_1,\ldots,X_p$$ for explaining a single response $$Y$$ by assuming that a linear relation of the form
\begin{align}
Y=\beta_0+\beta_1 X_1+\ldots+\beta_p X_p+\varepsilon \tag{3.1}
\end{align}
holds between the predictors $$X_1,\ldots,X_p$$ and the response $$Y$$. In (3.1), $$\beta_0$$ is the intercept and $$\beta_1,\ldots,\beta_p$$ are the slopes, respectively. $$\varepsilon$$ is a random variable with mean zero and independent from $$X_1,\ldots,X_p$$. Another way of looking at (3.1) is
\begin{align}
\mathbb{E}[Y|X_1=x_1,\ldots,X_p=x_p]=\beta_0+\beta_1x_1+\ldots+\beta_px_p, \tag{3.2}
\end{align}
since $$\mathbb{E}[\varepsilon|X_1=x_1,\ldots,X_p=x_p]=0$$. Therefore, the mean of $$Y$$ changes in a linear fashion with respect to the values of $$X_1,\ldots,X_p$$. Hence the interpretation of the coefficients:

• $$\beta_0$$: is the mean of $$Y$$ when $$X_1=\ldots=X_p=0$$.
• $$\beta_j$$, $$1\leq j\leq p$$: is the additive increment in the mean of $$Y$$ for an increment of one unit in $$X_j=x_j$$, provided that the remaining variables do not change.

Figure 3.1 illustrates the geometrical interpretation of a multiple linear model: a plane in the $$(p+1)$$-dimensional space. If $$p=1$$, the plane is the regression line for simple linear regression. If $$p=2$$, the plane can be visualized in a three-dimensional plot.

The estimation of $$\beta_0,\beta_1,\ldots,\beta_p$$ is done by minimizing the so-called Residual Sum of Squares (RSS).
First we need to introduce some helpful matrix notation:

• A sample of $$(X_1,\ldots,X_p,Y)$$ is denoted by $$(X_{11},\ldots,X_{1p},Y_1),\allowbreak \ldots,(X_{n1},\ldots,X_{np},Y_n)$$, where $$X_{ij}$$ denotes the $$i$$-th observation of the $$j$$-th predictor $$X_j$$. We denote by $$\mathbf{X}_i=(X_{i1},\ldots,X_{ip})$$ the $$i$$-th observation of $$(X_1,\ldots,X_p)$$, so the sample simplifies to $$(\mathbf{X}_{1},Y_1),\ldots,(\mathbf{X}_{n},Y_n)$$.
• The design matrix contains all the information of the predictors, plus a column of ones:
\begin{align*}
\mathbf{X}=\begin{pmatrix} 1 & X_{11} & \cdots & X_{1p}\\ \vdots & \vdots & \ddots & \vdots\\ 1 & X_{n1} & \cdots & X_{np} \end{pmatrix}_{n\times(p+1)}.
\end{align*}
• The vector of responses $$\mathbf{Y}$$, the vector of coefficients $$\boldsymbol\beta$$, and the vector of errors are, respectively,
\begin{align*}
\mathbf{Y}=\begin{pmatrix} Y_1 \\ \vdots \\ Y_n \end{pmatrix}_{n\times 1},\quad\boldsymbol\beta=\begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_p \end{pmatrix}_{(p+1)\times 1},\text{ and } \boldsymbol\varepsilon=\begin{pmatrix} \varepsilon_1 \\ \vdots \\ \varepsilon_n \end{pmatrix}_{n\times 1}.
\end{align*}

Thanks to the matrix notation, we can turn the sample version of the multiple linear model, namely
\begin{align*}
Y_i=\beta_0 + \beta_1 X_{i1} + \ldots +\beta_p X_{ip} + \varepsilon_i,\quad i=1,\ldots,n,
\end{align*}
into something as compact as
\begin{align*}
\mathbf{Y}=\mathbf{X}\boldsymbol\beta+\boldsymbol\varepsilon.
\end{align*}

The RSS for the multiple linear regression is
\begin{align}
\text{RSS}(\boldsymbol\beta):=&\,\sum_{i=1}^n(Y_i-\beta_0-\beta_1X_{i1}-\ldots-\beta_pX_{ip})^2\nonumber\\
=&\,(\mathbf{Y}-\mathbf{X}\boldsymbol{\beta})'(\mathbf{Y}-\mathbf{X}\boldsymbol{\beta}).\tag{3.3}
\end{align}
The RSS aggregates the squared vertical distances from the data to a regression plane given by $$\boldsymbol\beta$$.
Note that the vertical distances are considered because we want to minimize the error in the prediction of $$Y$$. Thus, the treatment of the variables is not symmetrical²; see Figure 3.2. The least squares estimators are the minimizers of the RSS:

\begin{align*} \hat{\boldsymbol{\beta}}:=\arg\min_{\boldsymbol{\beta}\in\mathbb{R}^{p+1}} \text{RSS}(\boldsymbol{\beta}). \end{align*}

Luckily, thanks to the matrix form of (3.3), it is simple to obtain a closed-form expression for the least squares estimates:

\begin{align} \hat{\boldsymbol{\beta}}=(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{Y}.\tag{3.4} \end{align}

Exercise 3.1 $$\hat{\boldsymbol{\beta}}$$ can be obtained by differentiating (3.3). Prove it using that $$\frac{\partial \mathbf{A}\mathbf{x}}{\partial \mathbf{x}}=\mathbf{A}$$ and $$\frac{\partial f(\mathbf{x})'g(\mathbf{x})}{\partial \mathbf{x}}=f(\mathbf{x})'\frac{\partial g(\mathbf{x})}{\partial \mathbf{x}}+g(\mathbf{x})'\frac{\partial f(\mathbf{x})}{\partial \mathbf{x}}$$ for two vector-valued functions $$f$$ and $$g$$.

Figure 3.2: The least squares regression plane $$y=\hat\beta_0+\hat\beta_1x_1+\hat\beta_2x_2$$ and its dependence on the kind of squared distance considered.

Let’s check that indeed the coefficients given by R’s lm are the ones given by (3.4) in a toy linear model.

```r
# Create the data employed in Figure 3.1

# Generates 50 points from a N(0, 1): predictors and error
set.seed(34567)
x1 <- rnorm(50)
x2 <- rnorm(50)
x3 <- x1 + rnorm(50, sd = 0.05) # Make variables dependent
eps <- rnorm(50)

# Responses
yLin <- -0.5 + 0.5 * x1 + 0.5 * x2 + eps
yQua <- -0.5 + x1^2 + 0.5 * x2 + eps
yExp <- -0.5 + 0.5 * exp(x2) + x3 + eps

# Data
dataAnimation <- data.frame(x1 = x1, x2 = x2, yLin = yLin,
                            yQua = yQua, yExp = yExp)

# Call lm
# lm employs formula = response ~ predictor1 + predictor2 + ...
# (names according to the data frame names) for denoting the regression
# to be done
mod <- lm(yLin ~ x1 + x2, data = dataAnimation)
summary(mod)
## 
## Call:
## lm(formula = yLin ~ x1 + x2, data = dataAnimation)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -2.37003 -0.54305  0.06741  0.75612  1.63829 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  -0.5703     0.1302  -4.380 6.59e-05 ***
## x1            0.4833     0.1264   3.824 0.000386 ***
## x2            0.3215     0.1426   2.255 0.028831 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.9132 on 47 degrees of freedom
## Multiple R-squared:  0.276,  Adjusted R-squared:  0.2452 
## F-statistic: 8.958 on 2 and 47 DF,  p-value: 0.0005057

# mod is a list with a lot of information
# str(mod) # Long output

# Coefficients
mod$coefficients
## (Intercept)          x1          x2 
##  -0.5702694   0.4832624   0.3214894

# Application of formula (3.4)

# Matrix X
X <- cbind(1, x1, x2)

# Vector Y
Y <- yLin

# Coefficients
beta <- solve(t(X) %*% X) %*% t(X) %*% Y
beta
##          [,1]
##    -0.5702694
## x1  0.4832624
## x2  0.3214894
```

Exercise 3.2 Compute $$\hat{\boldsymbol{\beta}}$$ for the regressions yLin ~ x1 + x2, yQua ~ x1 + x2, and yExp ~ x2 + x3 using equation (3.4) and the function lm. Check that the fitted plane and the coefficient estimates are coherent.

Once we have the least squares estimates $$\hat{\boldsymbol{\beta}}$$, we can define the next two concepts:

• The fitted values $$\hat Y_1,\ldots,\hat Y_n$$, where

\begin{align*} \hat Y_i:=\hat\beta_0+\hat\beta_1X_{i1}+\cdots+\hat\beta_pX_{ip},\quad i=1,\ldots,n. \end{align*}

They are the vertical projections of $$Y_1,\ldots,Y_n$$ onto the fitted plane (see Figure 3.2).
In matrix form, using (3.4),

\begin{align*} \hat{\mathbf{Y}}=\mathbf{X}\hat{\boldsymbol{\beta}}=\mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{Y}=\mathbf{H}\mathbf{Y}, \end{align*}

where $$\mathbf{H}:=\mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'$$ is called the hat matrix because it “puts the hat on $$\mathbf{Y}$$”. What it does is project $$\mathbf{Y}$$ onto the regression plane (see Figure 3.2).

• The residuals (or estimated errors) $$\hat \varepsilon_1,\ldots,\hat \varepsilon_n$$, where

\begin{align*} \hat\varepsilon_i:=Y_i-\hat Y_i,\quad i=1,\ldots,n. \end{align*}

They are the vertical distances between the actual data and the fitted data.

Model assumptions

Up to now, we have not made any probabilistic assumption on the data generation process: $$\hat{\boldsymbol{\beta}}$$ was derived from purely geometrical arguments, not probabilistic ones. However, some probabilistic assumptions are required for inferring the unknown population coefficients $$\boldsymbol{\beta}$$ from the sample $$(\mathbf{X}_1, Y_1),\ldots,(\mathbf{X}_n, Y_n)$$.

The assumptions of the multiple linear model are:

1. Linearity: $$\mathbb{E}[Y|X_1=x_1,\ldots,X_p=x_p]=\beta_0+\beta_1x_1+\ldots+\beta_px_p$$.
2. Homoscedasticity: $$\mathbb{V}\text{ar}[\varepsilon_i]=\sigma^2$$, with $$\sigma^2$$ constant for $$i=1,\ldots,n$$.
3. Normality: $$\varepsilon_i\sim\mathcal{N}(0,\sigma^2)$$ for $$i=1,\ldots,n$$.
4. Independence of the errors: $$\varepsilon_1,\ldots,\varepsilon_n$$ are independent (or uncorrelated, $$\mathbb{E}[\varepsilon_i\varepsilon_j]=0$$, $$i\neq j$$, since they are assumed to be normal).

A good one-line summary of the linear model is the following (independence is assumed):

\begin{align} Y|(X_1=x_1,\ldots,X_p=x_p)\sim \mathcal{N}(\beta_0+\beta_1x_1+\ldots+\beta_px_p,\sigma^2).\tag{3.5} \end{align}

Inference on the parameters $$\boldsymbol\beta$$ and $$\sigma$$ can be done, conditionally³ on $$\mathbf{X}_1,\ldots,\mathbf{X}_n$$, from (3.5).
We do not explore this further, referring the interested reader to these notes. Instead, we remark the connection between least squares estimation and the maximum likelihood estimator derived from (3.5).

First, note that (3.5) is the population version of the linear model (it is expressed in terms of the random variables, not in terms of their samples). The sample version that summarizes assumptions i–iv is

\begin{align*} \mathbf{Y}|(\mathbf{X}_1,\ldots,\mathbf{X}_n)\sim \mathcal{N}_n(\mathbf{X}\boldsymbol{\beta},\sigma^2\mathbf{I}). \end{align*}

Using this result, it is easy to obtain the log-likelihood function of $$Y_1,\ldots,Y_n$$ conditionally⁴ on $$\mathbf{X}_1,\ldots,\mathbf{X}_n$$ as

\begin{align} \ell(\boldsymbol{\beta})=\log\phi_{\sigma^2\mathbf{I}}(\mathbf{Y}-\mathbf{X}\boldsymbol{\beta})=\sum_{i=1}^n\log\phi_{\sigma}(Y_i-(\mathbf{X}\boldsymbol{\beta})_i).\tag{3.6} \end{align}

Finally, the next result justifies the consideration of the least squares estimate: it equals the maximum likelihood estimator derived under assumptions i–iv.

Theorem 3.1 Under assumptions i–iv, the maximum likelihood estimate of $$\boldsymbol{\beta}$$ is the least squares estimate (3.4):

\begin{align*} \hat{\boldsymbol{\beta}}_\mathrm{ML}=\arg\max_{\boldsymbol{\beta}\in\mathbb{R}^{p+1}}\ell(\boldsymbol{\beta})=(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{Y}. \end{align*}

Proof. Expanding the first equality in (3.6) gives (using $$|\sigma^2\mathbf{I}|^{1/2}=\sigma^{n}$$)

\begin{align*} \ell(\boldsymbol{\beta})=-\log((2\pi)^{n/2}\sigma^n)-\frac{1}{2\sigma^2}(\mathbf{Y}-\mathbf{X}\boldsymbol{\beta})'(\mathbf{Y}-\mathbf{X}\boldsymbol{\beta}). \end{align*}

Optimizing $$\ell$$ does not require knowledge of $$\sigma^2$$, since differentiating with respect to $$\boldsymbol{\beta}$$ and equating to zero gives (see Exercise 3.1) $$\frac{1}{\sigma^2}(\mathbf{Y}-\mathbf{X}\boldsymbol{\beta})'\mathbf{X}=0$$. The result follows.
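Both the closed form (3.4) and the first-order condition in the proof can be verified numerically. Here is a Python/NumPy sketch on simulated data (illustrative only; the document's own examples use R):

```python
import numpy as np

# Simulated toy data: design matrix with an intercept column and p = 2
# predictors, responses from a linear model with illustrative coefficients
rng = np.random.default_rng(34567)
n, p = 50, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
Y = X @ np.array([-0.5, 0.5, 0.5]) + rng.normal(size=n)

# Least squares estimate via (3.4), solving the normal equations
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# First-order condition from the proof: (Y - X beta_hat)'X = 0
grad = (Y - X @ beta_hat) @ X
print(np.allclose(grad, 0))                        # normal equations hold

# The hat matrix H = X(X'X)^{-1}X' is symmetric and idempotent,
# i.e., an orthogonal projection onto the column space of X
H = X @ np.linalg.solve(X.T @ X, X.T)
print(np.allclose(H @ H, H), np.allclose(H, H.T))
```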
### 3.1.2 Logistic regression

Model formulation

When the response $$Y$$ can take only two values, codified for convenience as $$1$$ (success) and $$0$$ (failure), it is called a binary variable. A binary variable, also known as a Bernoulli variable, is a $$\mathrm{B}(1, p)$$. Recall that $$\mathbb{E}[\mathrm{B}(1, p)]=\mathbb{P}[\mathrm{B}(1, p)=1]=p$$.

If $$Y$$ is a binary variable and $$X_1,\ldots,X_p$$ are predictors associated to $$Y$$, the purpose in logistic regression is to estimate

\begin{align} p(x_1,\ldots,x_p):=&\,\mathbb{P}[Y=1|X_1=x_1,\ldots,X_p=x_p]\nonumber\\ =&\,\mathbb{E}[Y|X_1=x_1,\ldots,X_p=x_p],\tag{3.7} \end{align}

that is, how the probability of $$Y=1$$ changes according to particular values, denoted by $$x_1,\ldots,x_p$$, of the predictors $$X_1,\ldots,X_p$$. A tempting possibility is to consider a linear model for (3.7), $$p(x_1,\ldots,x_p)=\beta_0+\beta_1x_1+\ldots+\beta_px_p$$. However, such a model will inevitably run into serious problems: negative probabilities and probabilities larger than one will arise.

A solution is to consider a function that takes the value of $$z=\beta_0+\beta_1x_1+\ldots+\beta_px_p$$, in $$\mathbb{R}$$, and maps it back to $$[0,1]$$. There are several alternatives to do so, based on distribution functions $$F:\mathbb{R}\longrightarrow[0,1]$$ that deliver $$y=F(z)\in[0,1]$$. Different choices of $$F$$ give rise to different models, the most common being the logistic distribution function:

\begin{align*} \mathrm{logistic}(z):=\frac{e^z}{1+e^z}=\frac{1}{1+e^{-z}}. \end{align*}

Its inverse, $$F^{-1}:[0,1]\longrightarrow\mathbb{R}$$, known as the logit function, is

\begin{align*} \mathrm{logit}(p):=\mathrm{logistic}^{-1}(p)=\log\frac{p}{1-p}. \end{align*}

This is a link function, that is, a function that maps a given space (in this case $$[0,1]$$) into $$\mathbb{R}$$.
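A quick numerical check, sketched here in Python, confirms that the logistic and logit functions are indeed inverses of each other:

```python
import math

def logistic(z):
    # logistic(z) = e^z / (1 + e^z) = 1 / (1 + e^{-z}), maps R into (0, 1)
    return 1 / (1 + math.exp(-z))

def logit(p):
    # logit(p) = log(p / (1 - p)), the inverse of the logistic function
    return math.log(p / (1 - p))

# z = 0 corresponds to probability 1/2
print(logistic(0))  # 0.5

# The two functions undo each other
for z in (-3.0, -0.5, 0.0, 2.0):
    assert abs(logit(logistic(z)) - z) < 1e-12
```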
The term link function is employed in generalized linear models, which follow exactly the same philosophy as logistic regression: mapping the domain of $$Y$$ to $$\mathbb{R}$$ in order to apply there a linear model. As mentioned, different link functions are possible, but we will concentrate here exclusively on the logit.

The logistic model is defined as the following parametric form for (3.7):

\begin{align} p(x_1,\ldots,x_p)&=\mathrm{logistic}(\beta_0+\beta_1x_1+\ldots+\beta_px_p)\nonumber\\ &=\frac{1}{1+e^{-(\beta_0+\beta_1x_1+\ldots+\beta_px_p)}}.\tag{3.8} \end{align}

The linear form inside the exponent has a clear interpretation:

• If $$\beta_0+\beta_1x_1+\ldots+\beta_px_p=0$$, then $$p(x_1,\ldots,x_p)=\frac{1}{2}$$ ($$Y=1$$ and $$Y=0$$ are equally likely).
• If $$\beta_0+\beta_1x_1+\ldots+\beta_px_p<0$$, then $$p(x_1,\ldots,x_p)<\frac{1}{2}$$ ($$Y=1$$ is less likely).
• If $$\beta_0+\beta_1x_1+\ldots+\beta_px_p>0$$, then $$p(x_1,\ldots,x_p)>\frac{1}{2}$$ ($$Y=1$$ is more likely).

To be more precise on the interpretation of the coefficients $$\beta_0,\ldots,\beta_p$$ we need to introduce the concept of odds. The odds is an equivalent way of expressing the distribution of probabilities of a binary variable. Since $$\mathbb{P}[Y=1]=p$$ and $$\mathbb{P}[Y=0]=1-p$$, both the success and failure probabilities can be inferred from $$p$$. Instead of using $$p$$ to characterize the distribution of $$Y$$, we can use

\begin{align} \mathrm{odds}(Y)=\frac{p}{1-p}=\frac{\mathbb{P}[Y=1]}{\mathbb{P}[Y=0]}.\tag{3.9} \end{align}

The odds is the ratio between the probability of success and the probability of failure. It is extensively used in betting due to its better interpretability. For example, if a horse $$Y$$ has a probability $$p=2/3$$ of winning a race ($$Y=1$$), then the odds of the horse is

\begin{align*} \text{odds}=\frac{p}{1-p}=\frac{2/3}{1/3}=2. \end{align*}

This means that the probability of the horse winning is twice as large as the probability of it losing. This is sometimes written as $$2:1$$ or $$2 \times 1$$ (spelled “two-to-one”). Conversely, if the odds of $$Y$$ is given, we can easily recover the probability of success $$p$$ using the inverse of (3.9):

\begin{align*} p=\mathbb{P}[Y=1]=\frac{\text{odds}(Y)}{1+\text{odds}(Y)}. \end{align*}

For example, if the odds of the horse were $$5$$, that would correspond to a probability of winning $$p=5/6$$.

Remark. Recall that the odds is a number in $$[0,+\infty]$$. The $$0$$ and $$+\infty$$ values are attained for $$p=0$$ and $$p=1$$, respectively. The log-odds (or logit) is a number in $$[-\infty,+\infty]$$.

We can rewrite (3.8) in terms of the odds (3.9). If we do so, we have:

\begin{align*} \mathrm{odds}(Y|&X_1=x_1,\ldots,X_p=x_p)\\ &=\frac{p(x_1,\ldots,x_p)}{1-p(x_1,\ldots,x_p)}\\ &=e^{\beta_0+\beta_1x_1+\ldots+\beta_px_p}\\ &=e^{\beta_0}e^{\beta_1x_1}\ldots e^{\beta_px_p}. \end{align*}

This provides the following interpretation of the coefficients:

• $$e^{\beta_0}$$: the odds of $$Y=1$$ when $$X_1=\ldots=X_p=0$$.
• $$e^{\beta_j}$$, $$1\leq j\leq p$$: the multiplicative increment of the odds for an increment of one unit in $$X_j=x_j$$, provided that the remaining variables do not change. If the increment in $$X_j$$ is of $$r$$ units, then the multiplicative increment in the odds is $$(e^{\beta_j})^r$$.

Model assumptions and estimation

Some probabilistic assumptions are required for performing inference on the model parameters $$\boldsymbol\beta$$ from the sample $$(\mathbf{X}_1, Y_1),\ldots,(\mathbf{X}_n, Y_n)$$. These assumptions are somewhat simpler than the ones for linear regression.

The assumptions of the logistic model are the following:

1. Linearity in the logit⁵: $$\mathrm{logit}(p(\mathbf{x}))=\log\frac{p(\mathbf{x})}{1-p(\mathbf{x})}=\beta_0+\beta_1x_1+\ldots+\beta_px_p$$.
2.
Binariness: $$Y_1,\ldots,Y_n$$ are binary variables.
3. Independence: $$Y_1,\ldots,Y_n$$ are independent.

A good one-line summary of the logistic model is the following (independence is assumed):

\begin{align*} Y|(X_1=x_1,\ldots,X_p=x_p)&\sim\mathrm{Ber}\left(\mathrm{logistic}(\beta_0+\beta_1x_1+\ldots+\beta_px_p)\right)\\ &=\mathrm{Ber}\left(\frac{1}{1+e^{-(\beta_0+\beta_1x_1+\ldots+\beta_px_p)}}\right). \end{align*}

Since $$Y_i\sim \mathrm{Ber}(p(\mathbf{X}_i))$$, $$i=1,\ldots,n$$, the log-likelihood of $$\boldsymbol{\beta}$$ is

\begin{align} \ell(\boldsymbol{\beta})=&\,\sum_{i=1}^n\log\left(p(\mathbf{X}_i)^{Y_i}(1-p(\mathbf{X}_i))^{1-Y_i}\right)\nonumber\\ =&\,\sum_{i=1}^n\left\{Y_i\log(p(\mathbf{X}_i))+(1-Y_i)\log(1-p(\mathbf{X}_i))\right\}.\tag{3.10} \end{align}

Unfortunately, due to the non-linearity of the optimization problem, there are no explicit expressions for $$\hat{\boldsymbol{\beta}}$$. These have to be obtained numerically by means of an iterative procedure, which may run into problems in low-sample situations with perfect classification. Unlike in the linear model, inference is not exact from the assumptions, but approximate in terms of maximum likelihood theory. We do not explore this further and refer the interested reader to these notes.

Figure 3.5: The logistic regression fit and its dependence on $$\beta_0$$ (horizontal displacement) and $$\beta_1$$ (steepness of the curve). Recall the effect of the sign of $$\beta_1$$ on the curve: if positive, the logistic curve has an ‘s’ form; if negative, the form is a reflected ‘s’.

Figure 3.4 shows how the log-likelihood changes with respect to the values of $$(\beta_0,\beta_1)$$ in three data patterns.
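The maximization of (3.10) can be sketched numerically. The following Python/NumPy illustration simulates data in the spirit of the y1 pattern below (with hypothetical coefficients) and runs the Newton-Raphson iteration, which for this model coincides with the iteratively reweighted least squares procedure that R's glm uses:

```python
import numpy as np

# Simulated data, purely for illustration: Bernoulli responses whose
# success probability follows a logistic model with hypothetical
# coefficients beta_true
rng = np.random.default_rng(34567)
n = 200
x = rng.normal(scale=1.5, size=n)
X = np.column_stack([np.ones(n), x])              # design matrix
beta_true = np.array([-0.5, 3.0])
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ beta_true))))

def loglik(beta):
    # Log-likelihood (3.10) of the logistic model
    pr = 1 / (1 + np.exp(-(X @ beta)))
    return np.sum(y * np.log(pr) + (1 - y) * np.log(1 - pr))

# Newton-Raphson: beta <- beta + (X'WX)^{-1} X'(y - p(beta)),
# where W = diag(p(beta) * (1 - p(beta)))
beta = np.zeros(2)
for _ in range(25):
    pr = 1 / (1 + np.exp(-(X @ beta)))
    grad = X.T @ (y - pr)                         # gradient of (3.10)
    hess = X.T @ (X * (pr * (1 - pr))[:, None])   # negative Hessian
    beta = beta + np.linalg.solve(hess, grad)

# At the maximum, the gradient vanishes and the likelihood is at least
# as large as at the data-generating coefficients
print(np.allclose(X.T @ (y - 1 / (1 + np.exp(-(X @ beta)))), 0))
print(loglik(beta) >= loglik(beta_true))
```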
The data of the illustration have been generated with the following code:

```r
# Create the data employed in Figure 3.4

# Data
set.seed(34567)
x <- rnorm(50, sd = 1.5)
y1 <- -0.5 + 3 * x
y2 <- 0.5 - 2 * x
y3 <- -2 + 5 * x
y1 <- rbinom(50, size = 1, prob = 1 / (1 + exp(-y1)))
y2 <- rbinom(50, size = 1, prob = 1 / (1 + exp(-y2)))
y3 <- rbinom(50, size = 1, prob = 1 / (1 + exp(-y3)))

# Data
dataMle <- data.frame(x = x, y1 = y1, y2 = y2, y3 = y3)
```

Let’s check that indeed the coefficients given by R’s glm are the ones that maximize the likelihood of the animation of Figure 3.4. We do so for y1 ~ x.

```r
# Call glm
# glm employs formula = response ~ predictor1 + predictor2 + ...
# (names according to the data frame names) for denoting the regression
# to be done. We need to specify family = "binomial" to make a
# logistic regression
mod <- glm(y1 ~ x, family = "binomial", data = dataMle)
summary(mod)
## 
## Call:
## glm(formula = y1 ~ x, family = "binomial", data = dataMle)
## 
## Deviance Residuals: 
##      Min        1Q    Median        3Q       Max  
## -2.47853  -0.40139   0.02097   0.38880   2.12362  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)  -0.1692     0.4725  -0.358 0.720274    
## x             2.4282     0.6599   3.679 0.000234 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 69.315  on 49  degrees of freedom
## Residual deviance: 29.588  on 48  degrees of freedom
## AIC: 33.588
## 
## Number of Fisher Scoring iterations: 6

# mod is a list with a lot of information
# str(mod) # Long output

# Coefficients
mod$coefficients
## (Intercept)           x 
##  -0.1691947   2.4281626

# Plot the fitted regression curve
xGrid <- seq(-5, 5, l = 200)
yGrid <- 1 / (1 + exp(-(mod$coefficients[1] + mod$coefficients[2] * xGrid)))
plot(xGrid, yGrid, type = "l", col = 2, xlab = "x", ylab = "y")
points(x, y1)
```

Exercise 3.3 For the regressions y2 ~ x and y3 ~ x, do the following:

• Check that $$\hat{\boldsymbol{\beta}}$$ is indeed maximizing the likelihood as compared with Figure 3.4.
• Plot the fitted logistic curve and compare it with the one in Figure 3.5.

1. Not to confuse with a sample!↩︎
2. If that was the case, we would consider perpendicular distances, which lead to Principal Component Analysis (PCA).↩︎
3. We assume that the randomness is on the response only.↩︎
4. We assume that the randomness is on the response only.↩︎
5. An equivalent way of stating this assumption is $$p(\mathbf{x})=\mathrm{logistic}(\beta_0+\beta_1x_1+\ldots+\beta_px_p)$$.↩︎
# Seven U.S. Economic Models Project Rapid Growth of Federal Debt

By Efraim Berkovich and Jagadeesh Gokhale

On May 16, 2019, PWBM participated in a session at the National Tax Association (NTA) 49th Annual Spring Symposium. The session compared overlapping-generations (OLG) models from the Penn Wharton Budget Model (PWBM), the Congressional Budget Office (CBO),1 the U.S. Congress Joint Committee on Taxation (JCT),2 EY QUEST,3 Diamond-Zodrow (DZ)4 from Rice University in Texas, Overlapping Generations USA (OGUSA),5 and the Global Gaidar Model (GGM).6 This NTA session constituted the second round of the OLG modeling meetings organized by the CBO. The goal of these meetings is to learn about the implications of alternative modeling choices for projecting U.S. economic outcomes under pre-specified changes in U.S. fiscal policy. CBO’s presentation is available here.

All of the models were executed under very simplified policy and economic assumptions to ensure comparability across models. Besides focusing only on a simple benefit cut, the analysis reported only percent changes in key macroeconomic variables (e.g., percent changes in GDP) rather than levels (e.g., actual GDP) and budgetary impacts (e.g., changes in program costs), consistent with an actual score. Projections differ across models because of alternative ways in which they are constructed and calibrated, especially on the sensitivity of individuals’ choices of how much to work and earn and how much to consume or save over time. PWBM’s projections of key macroeconomic variables turned out to be quite close to the CBO’s results.

Each modeling group reports how the economy’s path changes under the alternative policy of a preannounced Old-Age and Survivors Insurance (OASI) benefit cut of one-third beginning in 2031. This simplified and stylized policy change was chosen to fit within the capabilities of all the models. PWBM can model highly complex Social Security reform proposals.
In fact, PWBM has modeled the Social Security 2100 Act and options for returning Social Security to financial balance. In addition, our Social Security simulator shows the effects of 648 policy combinations.

The model comparison metric for each macroeconomic outcome variable is the change in the variable’s time path under the new policy relative to the path projected under current policy. Although the results presented at the symposium were limited to changes in variables, models that are supported by a microsimulation, such as PWBM’s and CBO’s, can also analyze levels of economic indicators. A critical assumption underlying outcome differences across models is how open the U.S. economy is to international capital flows.

Comparing Model Projections

Figure 1 shows that all but one of the models project that the OASI benefit cut in 2031 reduces the debt-to-GDP ratio by between 55 and 75 percentage points approaching mid-century. Under PWBM’s model (light blue line in Figure 1), this ratio declines by 55 percentage points (from 285 percent to 230 percent), whereas the decline projected by the CBO (orange line in Figure 1) is 59 percentage points.

Figure 1: Change in the Debt-to-GDP Ratio

Projections of growth in the capital stock differ significantly across models, as seen in Figure 2. The differences mostly arise from the alternative international capital flows assumed by different models. The OASI cut policy induces model individuals to consume less and work more, thus generating additional national saving. The more closed the economy is to foreign capital flows, the greater the share of increased saving that is retained within the United States. PWBM assumes that foreigners purchase 40 percent of new debt issued by the federal government each year and provide 40 percent of the capital flows needed to equilibrate U.S. interest rates to the world rate. The CBO’s openness assumption is similar to PWBM’s, as reflected in similar projection outcomes for growth in the U.S. capital stock.
Figure 2: Change in Capital Stocks

As seen in Figure 3, different models assume different sensitivities of individuals’ labor supply in response to the OASI cut policy. In addition, higher retention of capital within the economy increases wages and thereby elicits a stronger labor supply response to the OASI cut policy. PWBM’s projection of the labor supply response is in the middle of the range of model outcomes. It is smaller than that of the CBO, especially over the long term.

Figure 3: Change in Labor Supply

Output growth projections follow from labor and capital growth. More capital and a stronger labor response make for a larger increase in output over time in response to the OASI benefit cut policy. PWBM’s projected output growth is less than that of CBO because of our lower projections of growth in the capital stock and labor supply.

Figure 4: Change in GDP

The NTA session brought forth three critical conclusions:

1. According to all model runs, under current fiscal policies the U.S. economy appears to be fiscally unsustainable, primarily because of the rapid increases in national debt projected within the models.
2. Even a significant policy adjustment, such as an OASI benefit cut of one-third after 2031, still leaves the U.S. with sizable debt relative to GDP by mid-century.
3. Projections of how such a policy alters the paths of key macroeconomic variables agree on the direction but not on the magnitudes of outcomes.

1. Congressional Budget Office, “An Overview of CBO’s Life-Cycle Growth Model” (February 2019), https://www.cbo.gov/publication/54985 and Shinichi Nishiyama and Felix Reichling, The Costs to Different Generations of Policies That Close the Fiscal Gap, Working Paper 2015-10 (Congressional Budget Office, December 2015), https://www.cbo.gov/publication/51097.  ↩

2.
Rachel Moore and Brandon Pecoraro, “Modeling the Internal Revenue Code in a Heterogeneous-Agent Framework: An Application to TCJA” (draft, May 2019), https://doi.org/10.2139/ssrn.3367192 and Rachel Moore and Brandon Pecoraro, “Macroeconomic Implications of Modeling the Internal Revenue Code in a Heterogeneous-Agent Framework” (draft, December 2018), https://doi.org/10.2139/ssrn.3193142  ↩ 3. EY QUEST Model, developed by Brandon Pizzola, Robert Carroll, and James Mackie: EY, Analyzing the Macroeconomic Impacts of the Tax Cuts and Jobs Act on the US Economy and Key Industries (2018), https://tinyurl.com/y4fpbjgf (PDF, 2.9 MB).  ↩ 4. Diamond-Zodrow Model: George R. Zodrow and John W. Diamond, “Dynamic Overlapping Generations Computable General Equilibrium Models and the Analysis of Tax Policy: The Diamond-Zodrow Model,” in Peter B. Dixon and Dale W. Jorgensen, eds., Handbook of Computable General Equilibrium Modeling (Elsevier, 2013), vol. 1, pp. 743–813, https://doi.org/10.1016/B978-0-444-59568-3.00011-0.  ↩ 5. OG-USA Model: Richard W. Evans and Jason DeBacker, “OG-USA: Documentation for the Large-Scale Dynamic General Equilibrium Overlapping Generations Model for U.S. Policy Analysis” (November 2018), https://tinyurl.com/y694ljom (PDF, 1.6 MB).  ↩ 6. Global Gaidar Model, developed by Seth Benzell, Maria Kasakova, Laurence Kotlikoff, Guillermo Lagarda, Kristina Nesterova, Victor Ye, and Andrey Zubarev: Seth G. Benzell, Laurence J. Kotlikoff, and Guillermo LaGarda, Simulating Business Cash Flow Taxation: An Illustration Based on the “Better Way” Corporate Tax Reform, Working Paper 23675 (National Bureau of Economic Research, August 2017), https://www.nber.org/papers/w23675.  
↩ Year,PWBM,CBO,JCT,EY,DZ,OGUSA,GGM 2018,0,0,-0.0090,-0.002,0.000683153,0.0085451,-0.0018 2019,-0.001817296,-0.00043586,-0.0038,-0.002,-0.000883303,0.009088321,-0.0025 2020,-0.002107175,-0.00114846,-0.0049,-0.002,-0.001767173,0.010364061,-0.0033 2021,-0.002727837,-0.00218941,-0.0043,-0.002,-0.002719637,0.011699796,-0.0042 2022,-0.003502271,-0.00355514,-0.0058,-0.003,-0.003791984,0.012981108,-0.0053 2023,-0.004812542,-0.00532386,-0.0066,-0.003,-0.005037284,0.014225625,-0.0065 2024,-0.006338599,-0.00749107,-0.0098,-0.004,-0.006488249,0.01552127,-0.0081 2025,-0.008340937,-0.01008102,-0.0144,-0.004,-0.008171496,0.016864144,-0.0098 2026,-0.010960611,-0.01313484,-0.0148,-0.005,-0.01011223,0.018081362,-0.0118 2027,-0.012795497,-0.01669578,-0.0191,-0.006,-0.012336492,0.019292072,-0.0142 2028,-0.015856299,-0.02079704,-0.0231,-0.007,-0.014872398,0.020500215,-0.017 2029,-0.019122668,-0.02546013,-0.0268,-0.008,-0.017751121,0.021719239,-0.0202 2030,-0.0227193,-0.03074516,-0.0302,-0.009,-0.021007152,0.022994115,-0.024 2031,-0.026840366,-0.03669265,-0.0370,-0.028,-0.024679218,-0.000989848,-0.0287 2032,-0.048283272,-0.06004975,-0.0653,-0.049,-0.045481772,-0.025565117,-0.08 2033,-0.070433207,-0.08403,-0.0943,-0.071,-0.067582047,-0.05079439,-0.1354 2034,-0.093398143,-0.10870346,-0.1246,-0.093,-0.091053384,-0.076796932,-0.1958 2035,-0.117051417,-0.1341507,-0.1539,-0.117,-0.115974303,-0.103585966,-0.2621 2036,-0.141315,-0.16045472,-0.1858,-0.143,-0.142431351,-0.131128417,-0.3337 2037,-0.166435919,-0.18771103,-0.2188,-0.169,-0.170519426,-0.15941696,-0.4117 2038,-0.192649581,-0.21602616,-0.2523,-0.196,-0.200342818,-0.18850191,-0.4967 2039,-0.219737696,-0.24551513,-0.2862,-0.225,-0.232016977,-0.21849496,-0.5894 2040,-0.247476964,-0.27631256,-0.3219,-0.254,-0.265517259,-0.249445713,-0.6903 2041,-0.276241592,-0.3085553,-0.3574,-0.285,-0.300973947,-0.281493933,-0.8002 2042,-0.306012178,-0.34239975,-0.3921,-0.317,-0.338526324,-0.314778028,-0.9195 
2043,-0.336773047,-0.37801987,-0.4297,-0.350,-0.378342343,-0.34952309,-1.0489 2044,-0.368645423,-0.41561233,-0.4684,-0.385,-0.420612634,-0.386052148,-1.1899 2045,-0.401769824,-0.45540131,-0.5087,-0.422,-0.465554121,-0.424585918,-1.3423 2046,-0.435956657,-0.49763611,-0.5485,-0.459,-0.513416385,-0.46550356,-1.5079 2047,-0.471560584,-0.54260589,-0.5907,-0.499,-0.564475053,-0.509220137,-1.6879 2048,-0.508663259,-0.59064255,-0.6353,-0.540,-0.619222335,-0.556311086,-1.8835 2049,-0.547677816,-0.64213196,-0.6810,-0.583,-0.677285891,-0.495761489,-2.0966 2050,-0.547677816,-0.69752441,-0.7299,-0.627,-0.743092776,0.078701851,-2.3295 Year,PWBM,CBO,JCT,EY,DZ,OGUSA,GGM 2018,0,0.00405095,0.0000,-0.001787746,-0.001676639,-0.002241398,0.021384929 2019,1.88675E-11,0.006651896,-0.0006,-0.001120728,0.000750919,-0.000819137,0.020916335 2020,0.004760008,0.009320146,-0.0011,-0.000481299,0.002283607,1.33619E-05,0.019569472 2021,0.008294739,0.01189044,-0.0012,0.000184092,0.003954317,0.000877794,0.019230769 2022,0.011825338,0.01452677,-0.0010,0.000889403,0.005658137,0.001748805,0.018903592 2023,0.015569176,0.017231348,-0.0006,0.00160248,0.007392626,0.002621852,0.018587361 2024,0.019330572,0.020013839,0.0001,0.002347319,0.009160192,0.003496164,0.017351598 2025,0.02324485,0.022873532,0.0010,0.003118522,0.010967842,0.004406511,0.016157989 2026,0.027442537,0.025818622,0.0020,0.003929701,0.012824493,0.005340402,0.01590106 2027,0.030602276,0.028845076,0.0031,0.004648971,0.014740227,0.006230011,0.015638575 2028,0.034708225,0.031954597,0.0041,0.00549514,0.01672643,0.007055676,0.014517506 2029,0.03870731,0.035223786,0.0052,0.006382183,0.018795716,0.007787284,0.014285714 2030,0.042744258,0.038632821,0.0064,0.007290972,0.020961968,0.00836727,0.014107884 2031,0.047065478,0.042311359,0.0076,0.008282086,0.023241344,0.012746892,0.013114754 2032,0.05067827,0.046113609,0.0088,0.011613781,0.025899653,0.017202954,0.012135922 2033,0.054242268,0.050093033,0.0101,0.015095718,0.028746684,0.021744069,0.011990408 
2034,0.058029931,0.054261347,0.0115,0.018689988,0.031792839,0.026408761,0.011049724 2035,0.061805295,0.058623584,0.0128,0.02239747,0.035045012,0.03119347,0.010140406 2036,0.065599308,0.063205823,0.0141,0.026289702,0.038513605,0.036068303,0.010802469 2037,0.069461654,0.068024218,0.0154,0.030263876,0.042212264,0.041016662,0.009916095 2038,0.073644444,0.073111477,0.0168,0.034395888,0.046157964,0.046044842,0.009803922 2039,0.078024442,0.078489045,0.0181,0.038654256,0.05037115,0.051194397,0.009694258 2040,0.08236624,0.084188681,0.0195,0.043037668,0.054875756,0.056475238,0.0095518 2041,0.08692423,0.090245672,0.0209,0.047537825,0.05970081,0.061942074,0.010130246 2042,0.091672315,0.096730306,0.0225,0.052165781,0.064876824,0.067648493,0.00997151 2043,0.096602544,0.103630805,0.0241,0.056937718,0.070458834,0.073669158,0.009810792 2044,0.101752576,0.111067707,0.0260,0.061868229,0.076510863,0.080132293,0.010337698 2045,0.107144716,0.119099561,0.0281,0.067068447,0.08310441,0.087133537,0.010847458 2046,0.112767746,0.127813583,0.0305,0.072366943,0.090329055,0.094821657,0.010666667 2047,0.118691542,0.137322508,0.0330,0.077984189,0.098252191,0.103382713,0.011147541 2048,0.124937814,0.147803902,0.0357,0.083842359,0.107206944,0.113064784,0.012250161 2049,0.131517969,0.159420973,0.0382,0.089952808,0.115940747,0.104514092,0.012674271 2050,0.132203469,0.172356152,0.0401,0.096334345,0.126792532,0.011650116,0.013707165 Year,PWBM,CBO,JCT,EY,DZ,OGUSA,GGM 2018,0,0.005788074,0.0185,0.005419556,0.002214848,-0.015433091,0.020396714 2019,0.003300599,0.005891166,0.0128,0.005190235,0.002517937,-0.015438785,0.020059878 2020,0.00306546,0.005989089,0.0115,0.004939059,0.002689777,-0.015518887,0.018804052 2021,0.002823498,0.005829442,0.0070,0.004671772,0.002878006,-0.015711889,0.017865391 2022,0.002372761,0.00569545,0.0069,0.004381967,0.003029483,-0.015928107,0.017051639 2023,0.002277175,0.005582555,0.0050,0.004071729,0.003154707,-0.016105594,0.016781878 
2024,0.002046212,0.005499409,0.0076,0.003746093,0.003260289,-0.016378319,0.015908661 2025,0.001756763,0.00543827,0.0116,0.003402399,0.003351135,-0.01655981,0.01592699 2026,0.001782561,0.005395041,0.0061,0.0030241,0.003431241,-0.016868907,0.015742941 2027,0.001005832,0.005357685,0.0085,0.002626689,0.003504382,-0.017162663,0.014735282 2028,0.000913528,0.005320343,0.0089,0.002210123,0.003573931,-0.017487029,0.013742513 2029,0.000717638,0.005358903,0.0083,0.001762997,0.003643184,-0.017851434,0.013592414 2030,0.000490324,0.005440003,0.0065,0.001283561,0.003715752,-0.018281935,0.012875057 2031,0.000158383,0.00569379,0.0104,0.000362267,0.00379533,-0.018180167,0.011949244 2032,8.48403E-05,0.005825642,0.0096,0.000183934,0.003938222,-0.01808098,0.012380684 2033,-3.98934E-06,0.005970681,0.0086,4.13233E-05,0.004123837,-0.01797458,0.011478385 2034,-5.60496E-05,0.006142141,0.0083,-8.69228E-05,0.004353565,-0.01784265,0.011362069 2035,-4.52806E-05,0.006328799,0.0057,-0.000217797,0.004628938,-0.0176874,0.011709476 2036,-3.38061E-06,0.00654568,0.0063,-0.000308825,0.004952256,-0.017507814,0.01155225 2037,-4.60597E-05,0.006786296,0.0073,-0.000376911,0.005326564,-0.017293693,0.012130406 2038,-6.53263E-05,0.007061587,0.0073,-0.000427946,0.005755704,-0.017039913,0.012175814 2039,-2.80706E-05,0.007364642,0.0068,-0.000455476,0.006244703,-0.016737433,0.012211164 2040,9.90958E-05,0.007706468,0.0078,-0.000458921,0.006799014,-0.016377656,0.012519702 2041,0.000253458,0.00809018,0.0070,-0.000430188,0.007425465,-0.0159399,0.013027978 2042,0.0003369,0.00855374,0.0044,-0.000366436,0.008134996,-0.015406439,0.013031822 2043,0.000610215,0.009018946,0.0060,-0.000265028,0.008939577,-0.014745798,0.013505572 2044,0.000755459,0.009573249,0.0072,-0.000123451,0.009853353,-0.013917398,0.014623072 2045,0.00094788,0.010202222,0.0102,5.6984E-05,0.010892797,-0.012881752,0.01503885 2046,0.00116785,0.010913956,0.0091,0.000289935,0.012078916,-0.011573927,0.016280864 
2047,0.001354149,0.011726469,0.0108,0.000566169,0.013437927,-0.009903261,0.016840025 2048,0.001636265,0.012713531,0.0143,0.000896432,0.015013879,-0.007747537,0.017821855 2049,0.002203082,0.013888583,0.0146,0.001282306,0.0167764,-0.007804256,0.018765268 2050,0.002130705,0.01523904,0.0177,0.001727061,0.018550003,-0.020487349,0.020467652 Year,PWBM,CBO,JCT,EY,DZ,OGUSA,GGM 2018,0,0.00518243,0.0118,0.00178,-0.001311387,-0.010836548,0.020231214 2019,0.00220664,0.006156152,0.0078,0.00195,0.000487247,-0.010347457,0.02 2020,0.003728063,0.007148579,0.0068,0.00206,0.001125504,-0.010111243,0.019774011 2021,0.004793765,0.007937249,0.0039,0.00218,0.001611022,-0.009937962,0.019589552 2022,0.00576798,0.008763904,0.0039,0.00234,0.001987641,-0.00977786,0.019408503 2023,0.006936996,0.009626338,0.0029,0.00246,0.002309746,-0.009591985,0.020164986 2024,0.008121617,0.010533368,0.0049,0.00260,0.002600298,-0.009468266,0.02 2025,0.009322092,0.011479496,0.0079,0.00274,0.002871181,-0.009272691,0.020739405 2026,0.010685021,0.0124651,0.0049,0.00295,0.003129509,-0.009152836,0.02066487 2027,0.01145042,0.013480435,0.0069,0.00305,0.003380211,-0.009038546,0.021505376 2028,0.012757321,0.014522183,0.0076,0.00317,0.003627451,-0.008966667,0.021428571 2029,0.014006441,0.015666144,0.0078,0.00329,0.003875088,-0.00895372,0.022261799 2030,0.015253079,0.016884006,0.0072,0.00340,0.004126476,-0.009036558,0.022182786 2031,0.016592181,0.018304848,0.0103,0.00333,0.004384748,-0.007465437,0.02214349 2032,0.017757466,0.019684699,0.0104,0.00446,0.004831878,-0.005873898,0.022988506 2033,0.018886098,0.021130706,0.0103,0.00565,0.005347839,-0.004252743,0.02292769 2034,0.02009413,0.022654742,0.0108,0.00688,0.00593122,-0.002576973,0.023746702 2035,0.021304984,0.024250998,0.0098,0.00815,0.006580561,-0.000849506,0.025438596 2036,0.022555036,0.025937735,0.0108,0.00947,0.00729663,0.000919975,0.025327511 2037,0.023807234,0.027715471,0.0121,0.01084,0.008081569,0.002732483,0.026956522 
2038,0.025151098,0.029602076,0.0127,0.01224,0.008938677,0.004593466,0.027657736 2039,0.026596485,0.031598953,0.0130,0.01369,0.009872495,0.006522788,0.029184549 2040,0.028033287,0.033724016,0.0143,0.01517,0.010888096,0.008529322,0.029812606 2041,0.029572817,0.035989868,0.0145,0.01669,0.011992725,0.010644925,0.031223629 2042,0.03117581,0.038443715,0.0136,0.01824,0.013179531,0.012898972,0.031746032 2043,0.03284458,0.041028395,0.0154,0.01985,0.014463586,0.015336876,0.033912324 2044,0.034571778,0.04384105,0.0171,0.02150,0.015861669,0.018027645,0.035159444 2045,0.036399233,0.046888956,0.0200,0.02321,0.017390733,0.021028451,0.036319613 2046,0.038291337,0.05020407,0.0204,0.02498,0.019069783,0.02443073,0.037450199 2047,0.0402765,0.053832448,0.0227,0.02681,0.020901434,0.028355029,0.040125885 2048,0.042382769,0.057878422,0.0263,0.02871,0.023074542,0.032962852,0.04189294 2049,0.04473137,0.062399474,0.0277,0.03068,0.024650485,0.030140669,0.044376435 2050,0.044898335,0.067440117,0.0306,0.03273,0.02989402,-0.009351013,0.046792453
http://openstudy.com/updates/55d60393e4b0cfce0811a984
## Abhisar one year ago

I was teaching a kid about elastic head-on collisions, and he was having some trouble deriving a few relations, so I am making this post to help him and others looking for similar content.

1. Abhisar

Let's suppose that a body of mass $$\sf M_a$$ is travelling with a velocity $$\sf V_a$$ along a straight line and it collides with another, stationary body of mass $$\sf M_b$$. Consider the collision to be elastic in nature.

2. Abhisar

$$\huge \bigstar$$ Derive equations for the final velocities $$\sf V_a^{'}~and~V_b^{'}$$ in terms of $$\sf M_a, M_b~and~V_a$$.

Since the collision is elastic, we can say that the final kinetic energy of the system is equal to the initial kinetic energy:

$$\sf \Rightarrow \frac{1}{2}{M_aV_a}^2 = \frac{1}{2}M_a{V_a^{'}}^2+\frac{1}{2}M_b{V_b^{'}}^2$$
$$\sf \Rightarrow M_a({V_a}^{2}-{V_a^{'}}^2)=M_b{V_b^{'}}^2$$
$$\Rightarrow \sf M_a(V_a-V_a^{'})(V_a+V_a^{'})=M_b{V_b^{'}}^2$$ ....Eq.1

Also, in both elastic and inelastic collisions momentum is conserved, i.e. the total initial momentum of the system is equal to the total final momentum of the system:

$$\sf \Rightarrow M_aV_a=M_aV_a^{'}+M_bV_b^{'}$$
$$\sf \Rightarrow M_a(V_a-V_a^{'})=M_bV_b^{'}$$ ...........Eq.2

Dividing Eq.1 by Eq.2 we get,

$$\sf V_a+{V_a}^{'}={V_b}^{'}$$ .........Eq.3

Substituting this value in Eq.2 we get,

$$\boxed{\sf {V_a}^{'}=\frac{V_a(M_a-M_b)}{M_a+M_b}}$$

Substituting this value in Eq.3 we get,

$$\sf \boxed{{V_b}^{'}=\frac{2M_aV_a}{M_a+M_b}}$$

3. IrishBoy123

4.
IrishBoy123 and the "ie" here is a non sequitur https://gyazo.com/64cbadb3997368b8e712800f0163f5c7 because it confuses/conflates conservation of momentum with conservation of energy 5. Abhisar Thanks @irishboy123 , It should be, $$\sf Va+{V_a}^{'}={V_b}^{'}$$ 6. arindameducationusc yes, even I was wondering .... Thanks to @irishboy123 And Awesome derivation Abhisar, It was very useful and got a good revision. Thank you 7. Abhisar I am glad you found it helpful c:
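The boxed results are easy to sanity-check numerically. Below is a minimal Python sketch (the function name and the example masses are my own choices, not from the thread) that computes the final velocities from the derived formulas and verifies conservation of momentum and kinetic energy, as well as Eq.3.

```python
def elastic_head_on(m_a, m_b, v_a):
    """Final velocities for an elastic head-on collision, body b initially at rest."""
    v_a_final = v_a * (m_a - m_b) / (m_a + m_b)   # boxed result for V_a'
    v_b_final = 2 * m_a * v_a / (m_a + m_b)       # boxed result for V_b'
    return v_a_final, v_b_final

m_a, m_b, v_a = 2.0, 3.0, 5.0
va_f, vb_f = elastic_head_on(m_a, m_b, v_a)

# Conservation checks (the starting points of Eq.1 and Eq.2):
assert abs(m_a * v_a - (m_a * va_f + m_b * vb_f)) < 1e-12          # momentum
assert abs(0.5 * m_a * v_a**2
           - (0.5 * m_a * va_f**2 + 0.5 * m_b * vb_f**2)) < 1e-12  # kinetic energy
# Eq.3: V_a + V_a' = V_b'
assert abs(v_a + va_f - vb_f) < 1e-12
```

For equal masses the formulas give the familiar full transfer of velocity: the incoming body stops and the target moves off with $$\sf V_a$$.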
http://math.stackexchange.com/questions/24049/projective-module-over-rx
# Projective module over $R[X]$

Let $(R,m)$ be a commutative noetherian local ring with unity. Suppose $P$ is a finitely generated projective module over $R[X]$ of rank $n$. Is $P$ free? If not, what is a counterexample? -

Well, the answer is affirmative, by an obvious application of Serre's conjecture, if $(R,m)$ is zero dimensional. In general I expect a counterexample, but I am not getting it. Thanking you –  A.G Feb 27 '11 at 20:11

According to Wikipedia, a counterexample occurs with R equal to the local ring of the curve $y^2 = x^3$ at the origin. It doesn't state what the counterexample is or provide a reference. en.wikipedia.org/w/… –  George Lowther Feb 27 '11 at 21:00

@Anjan: the Quillen-Suslin solution of Serre's conjecture actually showed a bit more, and in particular gives an affirmative answer to your question when $R$ is a DVR. Moreover, it is a famous conjecture of Bass and Quillen that your question should have an affirmative answer if you add the hypothesis of regularity (since a regular one-dimensional local ring is a DVR, this is exactly what we did above). So I think you should look for counterexamples among singular one-dimensional local rings. (I don't know of one off the top of my head, but I also suspect they should exist.) –  Pete L. Clark Feb 27 '11 at 21:03

Another good place to look would be Lam's new(ish) book Serre's Problem on Projective Modules. (I do not yet have a copy, or I would tell you whether a counterexample can be found there.) –  Pete L. Clark Feb 27 '11 at 21:05

@Pete: If $R$ is a DVR there's a much easier proof (this is from Lam's book you mentioned): Start from Kaplansky's theorem that every submodule of a free module over a left hereditary ring (a ring in which every left ideal is projective) is isomorphic to a direct sum of left ideals (see Lam, Lectures on Modules and Rings, (2.24), p. 42). For a DVR $R$, every left ideal $I$ of $R[X]$ is of the form $R[X] \cdot f$, where $f$ is a polynomial in $I$ of minimal degree (by the division algorithm).
Since $R[X]$ has no zero-divisors, we have $I \cong R[X]$. Thus Kaplansky's theorem shows that every projective module over $R[X]$ is free. –  t.b. Feb 27 '11 at 21:55

Here is some elaboration on the wiki entry in George's comment. Suppose $R$ is a domain. $R$ is called seminormal if whenever $b^2=c^3$ in $R$ one can find $t \in R$ such that $b=t^3, c=t^2$. The relevant thing here is the following fact:

R is seminormal if and only if $Pic(R) \cong Pic(R[X])$

So if $R$ is local and not seminormal then there will be a projective, non-free $R[x]$-module of rank $1$. As for an explicit example, take $R = k[t^2,t^3]_{(t^2,t^3)}$. One can check that $I = (1-tx, t^2x^2)$ is an invertible (fractional) ideal of $R[x]$ which is non-free.

UPDATE: by request, a reference is this survey, see page 16. I am sure you can find more by googling the relevant terms. -

Is $1-tx$ even in $R[x]$? –  George Lowther Feb 27 '11 at 22:26

George, it is inside the quotient field. –  curious Feb 27 '11 at 22:28

Ok, yes, it's a fractional ideal (I was forgetting that invertible ideal means invertible fractional ideal by definition). –  George Lowther Feb 27 '11 at 22:32

Can you tell me a reference for the proof of the fact that $R$ is seminormal iff $Pic(R) \cong Pic(R[X])$? Thanks for this nice answer –  A.G Feb 28 '11 at 4:51
https://tex.stackexchange.com/questions/485003/remove-hyphens-from-chapter-section-titles
# Remove hyphens from chapter, section titles

How do I remove hyphenation from all the \part{}, \chapter{}, \section{}, etc., titles of a document?

• It will depend on the specifics of your document, a sample MWE for which you have not posted. Going ragged right will prevent hyphenation and is often done with large point-size title lines. – Steven B. Segletes Apr 15 at 18:00
• For example, in the article class there is a definition \newcommand\section{\@startsection {section}{1}{\z@}% {-3.5ex \@plus -1ex \@minus -.2ex}% {2.3ex \@plus.2ex}% {\normalfont\Large\bfseries}}. The term \raggedright can be added after \bfseries to remove full justification and thus hyphenation. But don't go editing your class files directly. Tell us more about your document class and preamble. – Steven B. Segletes Apr 15 at 18:05
• @StevenB.Segletes I'm using \documentclass[11pt]{book}. – Geremia Apr 15 at 18:06

When disabling hyphenation, be sure to switch from full justification to \raggedright, or \centering in the case of part-level headers.

Since you're using the book document class, I suggest you employ the facilities of the sectsty package:

\usepackage{sectsty}
\partfont{\centering}
\chapterfont{\raggedright}
\allsectionsfont{\raggedright}

Hyphenation only occurs with full justification (unless the ragged2e package is employed, hat tip Mico). For large point-size titles, it is frequently disabled by invoking \raggedright. That can be made standard for your document by redefining the particular sectioning command. For example, you can go to the book document class, copy the definitions of the sectioning commands that require modification, and use \makeatletter...\makeatother redefinitions to add \raggedright. I demonstrate it here, mid-document, but you should do it in the preamble. These sorts of changes can also be done with the titlesec package, but I will let someone else post that.
\documentclass{book}
\begin{document}

\section{This is a long title that requires extraordinary hyphenation}

\makeatletter
\renewcommand\section{\@startsection {section}{1}{\z@}%
  {-3.5ex \@plus -1ex \@minus -.2ex}%
  {2.3ex \@plus.2ex}%
  {\normalfont\Large\bfseries\raggedright}}
\makeatother

\section{This is a long title that requires extraordinary hyphenation}

\end{document}
https://cms.math.ca/CMS/Events/winter98/w98-abs/node160.html
## Paul Selick - Natural decompositions of loop suspensions and tensor algebras

PAUL SELICK, Department of Mathematics, University of Toronto, Toronto, Ontario M5S 3G3, Canada

Natural decompositions of loop suspensions and tensor algebras (joint work with Jie Wu).

Consider the full subcategory of pointed topological spaces whose objects are simply connected suspensions of finite type. For X in the above category we examine natural decompositions of $\Omega\Sigma X$ localized at a prime p as a product (up to homotopy) of other spaces. Since, as a Hopf algebra, $H_*(\Omega\Sigma X)$ is isomorphic to the tensor algebra T(V), where $V=\tilde{H}_*(X)$, any such decomposition yields a natural coalgebra decomposition of T(V) (which need not be a Hopf algebra decomposition, since we have not required our decomposition to respect the H-space structure on $\Omega\Sigma X$). We have shown that the converse is true: every natural decomposition of T(V) can be geometrically realized as a natural decomposition of the space $\Omega\Sigma X$. Having thus translated the problem to algebra, we next consider the algebraic problem of finding natural coalgebra decompositions of tensor algebras. We show that there is a natural coalgebra decomposition of T(V) (natural with respect to the vector space V) in which the factor B(V) contains V itself and is minimal in the sense that it is (up to isomorphism) a retract of any coalgebra containing V which is a natural retract of T(V). The coalgebra $B_n(V)$ is the smallest natural coalgebra retract containing a certain submodule described below. This decomposition generalizes that given by the Poincaré-Birkhoff-Witt Theorem, except that it is natural with respect to maps of vector spaces, whereas PBW is natural only with respect to maps of ordered vector spaces. Some properties of B(V) and of the product A(V) of all the other factors are as follows. B(V) is a sub-Hopf-algebra of T(V) which is a retract as a coalgebra.
We show that, as conjectured by Cohen, the only primitives in A(V) occur in weights of the form $p^t$. Also, A(V) has a filtration in which each of the filtration quotients is a polynomial algebra. A description of the generators for these polynomial algebras is given for the first $p^2-1$ filtration quotients, computation of the others remaining beyond our present capabilities. One important aspect of this work is its relationship to the representation theory of the symmetric group $\Sigma_n$. It provides some information about the important $\Sigma_n$-module Lie(n), described below, which has arisen in many contexts and appears in current work of Cohen, Dwyer, Arone, and others. To define Lie(n), consider the vector space V with basis $x_1,\dots,x_n$; the symmetric group $\Sigma_n$ acts on T(V) by permuting the basis elements. Let $L_n(V)$ be the primitives of "weight" n in T(V) which are indecomposable (i.e. not p-th powers); explicitly, $L_n(V)$ consists of commutators of length n in the elements of V. Lie(n) is the $\Sigma_n$-submodule of $L_n(V)$ spanned by the commutators in which each basis element appears exactly once. We construct a projective $\Sigma_n$-submodule of Lie(n) of which any projective $\Sigma_n$-submodule of Lie(n) is a retract (up to isomorphism). If n is invertible modulo p, then it is well known that Lie(n) is itself projective, and it is easy to see that this submodule is all of Lie(n); in particular, in characteristic 0 this holds for all n.
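As a concrete, elementary aside (not part of the abstract): in characteristic 0 the dimension of the weight-n component $L_n(V)$ of the free Lie algebra, for $\dim V = k$, is given by Witt's formula $\frac{1}{n}\sum_{d\mid n}\mu(d)\,k^{n/d}$. The short Python sketch below computes it; the function names are my own.

```python
def mobius(n):
    """Möbius function mu(n) by trial division."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # squared prime factor => mu = 0
            result = -result
        d += 1
    if n > 1:                     # one remaining prime factor
        result = -result
    return result

def free_lie_dim(k, n):
    """Witt's formula: dim of the weight-n component of the free Lie algebra on k generators."""
    return sum(mobius(d) * k ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

# Free Lie algebra on 2 generators: dimensions in weights 1..6
assert [free_lie_dim(2, n) for n in range(1, 7)] == [2, 1, 2, 3, 6, 9]
```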
http://mathhelpforum.com/algebra/53018-factoring-polynomial.html
1. ## Factoring a polynomial

I've got a little problem with this. How do I factor a polynomial over a ternary field? I know how to do it in binary:

in binary: x^7 - 1 = (x+1)(x^3 + x + 1)(x^3 + x^2 + 1)

I get that, and now the question is how do I do it for the ternary field: x^5 - 1 = ???

thx a lot for help

2. First, in any ring you can write $x^5-1=(x-1)(x^4+x^3+x^2+x+1)$ Convince yourself that $x^4+x^3+x^2+x+1$ has no linear factor by checking that this polynomial has no ternary root. This means either the polynomial can't be factored any further or $x^4+x^3+x^2+x+1=(x^2+Ax+B)(x^2+Cx+D)$ Since BD=1, in the ternary field either B=D=1 or B=D=-1. Now you can play around with A, C and see whether we can find a suitable A and C.

3. One more thing... do any special rules apply like in the binary case? For example in the binary field I could say that x + x = 0... can I do the same for the ternary field?
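A brute-force check confirms the answer above. The self-contained Python sketch below (my own helpers, not from the thread; polynomials are coefficient lists over GF(3), lowest degree first) verifies that $x^5-1=(x-1)(x^4+x^3+x^2+x+1)$ over the ternary field, that the quartic has no root in GF(3), and that it has no factorization into two monic quadratics either, so it is irreducible there.

```python
from itertools import product

P = 3  # the ternary field GF(3)

def poly_mul(a, b):
    """Multiply two polynomials (coefficients mod P, lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def poly_eval(a, x):
    """Evaluate a polynomial at x in GF(P)."""
    return sum(c * pow(x, i, P) for i, c in enumerate(a)) % P

x5_minus_1 = [-1 % P, 0, 0, 0, 0, 1]   # x^5 - 1
linear = [-1 % P, 1]                   # x - 1
quartic = [1, 1, 1, 1, 1]              # x^4 + x^3 + x^2 + x + 1

# (x - 1)(x^4 + x^3 + x^2 + x + 1) = x^5 - 1 over GF(3)
assert poly_mul(linear, quartic) == x5_minus_1

# The quartic has no root in GF(3), hence no linear factor ...
assert all(poly_eval(quartic, x) != 0 for x in range(P))

# ... and no factorization into two monic quadratics either:
assert not any(
    poly_mul([b, a, 1], [d, c, 1]) == quartic
    for a, b, c, d in product(range(P), repeat=4)
)
print("x^4 + x^3 + x^2 + x + 1 is irreducible over GF(3)")
```

(Restricting to monic quadratic factors loses nothing: the leading coefficients of any quadratic factorization of a monic quartic multiply to 1, so both factors can be rescaled by units to be monic.)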
https://quantumcomputing.stackexchange.com/questions/20825/how-is-i-rhoqc-i-cc-rhoqc
# How is $I(\rho^{QC})=I_{CC}(\rho^{QC})$?

On page 3 of this paper, for the proof of Theorem 1, it states, using Lemma 2 from the previous page, that if $$I\big((\Lambda_{A}\otimes\Gamma_{B})[\rho]\big)=I(\rho)$$ then there exist $$\Lambda_{A}^{*}$$ and $$\Gamma_{B}^{*}$$ s.t. $$(\Lambda_{A}^{*}\otimes\Gamma_{B}^{*})\circ(\Lambda_{A}\otimes\Gamma_{B})[\rho]=\rho$$ Using this, they show that $$\rho^{QC}=(M^{*}\otimes I)[\rho^{CC}]$$ where $$\rho^{CC}=(M\otimes N)[\rho]$$ with the assumption that $$I(\rho^{CC})=I(\rho)$$, so $$M^{*}$$ and $$N^{*}$$ exist. However, they state that $$I(\rho^{QC})=I_{CC}(\rho^{QC})=I_{CC}(\rho)=I(\rho)$$ I understand the last equality fine, but how are they getting the second equality? $$(M\otimes N)[\rho^{QC}]=(M\otimes N) \circ (M^{*}\otimes I)[\rho^{CC}]=\sum_{ij}p_{ij}|i\rangle\langle i|\otimes N(|j\rangle\langle j|)$$ and so $$\rho_{CC}^{QC}=\sum_{ij}p_{ij}|i\rangle\langle i|\otimes N(|j\rangle\langle j|)=(M\otimes N)[\rho^{QC}]$$ so $$(M^{*}\otimes N^{*}) \circ (M\otimes N)[\rho^{QC}]=\rho^{QC}$$ and so $$I_{CC}(\rho^{QC})=I(\rho^{QC})$$ But how do they get $$I_{CC}(\rho^{QC})=I_{CC}(\rho)$$ The only thing I can think of is $$(I \otimes N^{*}) \circ (I \otimes N)[\rho^{CC}]=\rho^{CC}$$ which should mean that $$I(\rho_{CC}^{QC})=I_{CC}(\rho)$$, but I am not sure if this reasoning is correct. Edit: $$\Lambda$$, $$\Gamma$$, $$M$$, $$M^{*}$$, $$N$$ and $$N^{*}$$ are all quantum maps. $$M$$ and $$N$$ are local quantum-to-classical measurement maps. CC and QC, per the paper, mean the classical-classical and quantum-classical states resulting from the application of the maps. • Can you define the relevant symbols you're using in the question? Aug 12 at 12:47 • @Rammus does the edit work? Aug 12 at 12:53
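As a numerical aside (my own sketch, not from the paper): the quantities involved are quantum mutual informations, $I(\rho)=S(\rho_A)+S(\rho_B)-S(\rho_{AB})$, and for a state that is already classical-classical the computational-basis measurement maps act trivially, so $I$ is trivially preserved — the easy direction of the lemma being used. The NumPy sketch below checks this for an arbitrary example distribution of my own choosing.

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy S(rho) in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def mutual_information(rho, d_a, d_b):
    """I(A:B) = S(A) + S(B) - S(AB) for rho acting on C^d_a tensor C^d_b."""
    r = rho.reshape(d_a, d_b, d_a, d_b)
    rho_a = np.einsum('ijkj->ik', r)   # partial trace over B
    rho_b = np.einsum('ijik->jk', r)   # partial trace over A
    return entropy(rho_a) + entropy(rho_b) - entropy(rho)

def dephase(rho):
    """Composite computational-basis measurement map (M tensor N on two qubits):
    it kills every off-diagonal element in the product basis."""
    return np.diag(np.diag(rho))

# A classical-classical state: rho^CC = sum_ij p_ij |i><i| tensor |j><j|
p = np.array([[0.4, 0.1], [0.1, 0.4]])
rho_cc = np.diag(p.flatten())

# M tensor N acts trivially on a CC state, so I is preserved exactly
assert np.allclose(dephase(rho_cc), rho_cc)
assert abs(mutual_information(rho_cc, 2, 2)
           - mutual_information(dephase(rho_cc), 2, 2)) < 1e-12
```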
http://mathimatikoi.org/old/index.php/example-of-sequence-of-functions-entire-topic-moved-to-new-forum
## Example of sequence of functions [entire topic moved to new forum]

Grigorios Kostakos on Sunday, October 25 2015, 03:07 AM

Give an example of a sequence of discontinuous functions which converges uniformly to a continuous function.

Grigorios Kostakos

• Replied by Apostolos J. Kos on Sunday, October 25 2015, 10:03 AM · #1

$$f_n(x)= \left\{\begin{matrix} 0 & , &x\neq 0 \\ 1/n &, & x=0 \end{matrix}\right., \;\;\;\;\;\;\; g_n(x)= \left\{\begin{matrix} 1/n & , & x \in \mathbb{Q}\\ 0&, &{\rm elsewhere} \end{matrix}\right.$$

Verification is immediate.

• Replied by Apostolos J. Kos on Monday, November 02 2015, 11:39 AM · #2

I don't want to open a new thread, so I post this interesting question here. Give an example of a sequence of infinitely differentiable functions that converges uniformly to zero but whose sequences of derivatives diverge everywhere.
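The first example can be checked numerically; the sketch below (my own illustration, not from the thread) samples $f_n$ on a grid containing the discontinuity and verifies that the sup-norm distance to the limit function $f\equiv 0$ is exactly $1/n$, which is independent of $x$ and tends to $0$:

```python
import numpy as np

def f_n(x, n):
    # discontinuous at x = 0: value 1/n there, 0 elsewhere
    return np.where(x == 0.0, 1.0 / n, 0.0)

xs = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # grid containing the bad point
for n in (1, 10, 100, 1000):
    sup_dist = np.max(np.abs(f_n(xs, n) - 0.0))  # distance to the limit f = 0
    assert sup_dist == 1.0 / n   # bound does not depend on x: uniform convergence
```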
https://mailman.ntg.nl/pipermail/dev-luatex/2007-September/000895.html
# [Dev-luatex] Some questions

Hans Hagen pragma at wxs.nl
Tue Sep 18 11:46:39 CEST 2007

Jonathan Sauer wrote:
> Hello,
>
> trying out the LuaTeX snapshot 20070820, I have a couple of questions:
>
> - Is it safe to add to the LuaTeX library tables (tex, callback), i.e.
>   a callback.push function? Or should these support functions have
>   their own table? ("callback_utils")

you can indeed use those tables as any table, i.e.

    function tex.mystuff() ... end

is legal. However, there is no guarantee that future versions of luatex will not use the same names as you do; even if (widespread) macropackages have their tex.myfunction spread all over the world, future luatex's may define their own myfunction in the tex namespace; of course one can save its meaning before redefining it; so ... it is possible but may have future side effects.

> - How do I generate an error when inside Lua code? I could use
>   tex.write("\\errmessage{...}"), but then the error would only be
>   generated after the Lua code has finished executing, and after
>   any TeX code created previously using tex.write et al. has
>   been executed.

- assert
- just print messages using texio.write_nl
- os.exit() also works

> - How do I access control sequences from Lua code? If I have a
>   macro \test, how would I get the tokens of this macro when inside
>   Lua? (without passing them from TeX) Is there a table tex.macro,
>   just like the tex.count, tex.toks etc. tables?

no, you print them to tex using tex.print and tex.sprint; you're either in lua or in tex; something in between would become extremely messy

Hans
http://nrich.maths.org/5937/clue
# Scale Invariance

##### Stage: 5 Challenge Level

Don't forget that probability distribution functions must integrate to $1$ over the allowed range of values. Try changing variables for the first part.

For the second part, note that clearly $x^2\rightarrow (ax)^2 \neq a(x^2)$. How could you make the two sides match for other powers of $x$?
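As a concrete illustration of the hint (my own sketch, not part of the original problem page): a power law $f(x)=x^k$ behaves homogeneously under the rescaling $x\to ax$, since $f(ax)=a^k f(x)$ — the shape is unchanged, only the overall scale moves:

```python
import math

def f(x, k=2.5):
    return x**k   # power law; the normalisation constant is omitted

a, k = 3.0, 2.5
for x in (0.5, 1.0, 2.0):
    # rescaling the argument only rescales the value by the fixed factor a**k,
    # independent of x: this is the scale-invariance property being hinted at
    assert math.isclose(f(a * x, k), a**k * f(x, k))
```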
https://de.maplesoft.com/support/help/maple/view.aspx?path=examples%2FQDifferenceEquations
# The QDifferenceEquations Package

The QDifferenceEquations package provides algorithms for solving linear q-difference (q-recurrence) equations or systems in terms of polynomials or rational functions.

Let $K$ be a field and $q$ an indeterminate over $K$. A linear q-difference equation with polynomial coefficients has the form

$$a_n Q^n y + a_{n-1} Q^{n-1} y + \dots + a_1 Q y + a_0 y = b,$$

where $a_n, a_{n-1}, \dots, a_1, a_0, b$ are polynomials in $x$ with coefficients from $K(q)$ and $Q$ is the q-shift operator $(Q^i y)(x) = y(q^i x)$ for all integers $i$. (This is a multiplicative analog of the ordinary shift operator $(E^i y)(x) = y(x+i)$.) The goal is to find all solutions $y$ that are polynomials or rational functions with coefficients from $K(q)$.

More generally, a system of such equations has the same form as above, but now $y$ is a vector of $m$ unknown functions, the coefficients $a_n, \dots, a_0$ are $m$ by $k$ matrices with polynomial entries, and $b$ is a vector with $k$ polynomial entries. As in the case of (systems of) ordinary difference equations, the polynomial (or rational) solutions form a finite-dimensional vector space over $K(q)$.

Note: The Maple LREtools package provides methods for solving ordinary difference equations or systems. For information on solving systems of ordinary or q-difference equations, see the LinearFunctionalSystems package.
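To make the action of $Q$ concrete, here is a small sketch (in Python rather than Maple, and not part of the help page) of the q-shift acting on a polynomial stored as a coefficient list: substituting $x \to q^i x$ multiplies the coefficient of $x^m$ by $q^{im}$.

```python
from fractions import Fraction

def q_shift(coeffs, q, i=1):
    """Apply Q^i, i.e. x -> q^i * x, to a polynomial given as [c0, c1, c2, ...]:
    the monomial c_m x^m becomes c_m q^(i*m) x^m."""
    return [c * q**(i * m) for m, c in enumerate(coeffs)]

q = Fraction(2)
p = [1, 3, 5]                           # 1 + 3x + 5x^2
assert q_shift(p, q) == [1, 6, 20]      # 1 + 3(2x) + 5(2x)^2 = 1 + 6x + 20x^2
assert q_shift(p, q, 2) == [1, 12, 80]  # (Q^2 y)(x) = y(q^2 x)
```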
https://martinvb.com/wp/analog-to-digital-percussions/
# Analog-to-digital percussions

tl;dr: jump to the end for a video where I hit a thing that sounds like other things.

Percussions are as cool as they are broadly defined. Hit a thing once – that's noise. Hit it again rhythmically – that's percussion. The constraint is, you need an it to hit: a drum, cymbal, triangle, tambourine, you name it. Something physically instantiated in this world, of course. But this also means owning them, carrying them around, setting them up, keeping them in tune, and all the jazz associated with objects that exist. So, in a world where atoms turn into bits (unintentional fission pun), I set out to find how good digital percussions could be.

## The it to hit

The first obvious step was scouting the options for an actual input device. Digital percussion modules/pads are a thing, and they come in a variety of shapes and target audiences – ranging from Roland's drumstick-affine Octapads to Akai's fingerhitting-good MPCs. However, if we filter for something that is playable with bare hands, compact, and somewhat similar to a drum/tambourine… then the options dry out really quick: only 3 actual contenders are left.

At the left, we see Korg's Wavedrum. The demo performance is worth checking out: the responsivity is out of this world – even jazz brushes work. The product, however, has no MIDI output and lacks any sort of media input: you're basically stuck with the built-in sounds. Also, having been released in 2009, it reeks of discontinuation. Too much of a risk for the >500€ price tag.

Next in line, there's the Roland HandSonic HPD-20, which seems to be the de facto standard in its niche. MIDI out and an SD card for user samples are present, and the multi-region pad makes for a highly configurable system. The issue? A ~1k€ price tag. *shrug*

Last but not least, I stumbled upon Keith McMillen's BopPad.
As showcased, each of its 4 quadrants can independently output velocity, pressure and radius information – all fully configurable – opening quite a lot of possibilities. Different from the other contenders, however, this 200€ module isn't a standalone system, requiring a computer for actual sound generation/output. This wasn't really an issue for me, so put a ring on it.

## From hit to bit, and back

The quest now was for decent sound samples. Percussion VSTs exist in countless shapes, forms, styles and price ranges – from high-end orchestral ensembles to free collections of latin one-shots. However, for realistic results, multi-sampled instruments are a must – and here, again, the options dry out. After some searching, I converged to Sonivox' Atsia Percussion, which offers a rich selection of drums, bells and shakers. Samples are provided in a way that hits to different regions of e.g. a drum (say, center, edge or rim) are mapped to different MIDI notes – which is precisely what I was looking for. Though it isn't free, the Atsia Percussion's 20USD price tag felt more than fair.

## Hit the brick (wall)

A problem arose when trying to put the VST and the BopPad together, however. As we see above, in (a), when playing a drum/tambourine, different sounds can be produced by hitting the skin at different radial regions (there's way more than that, but that's a start). The BopPad, as seen in (b), however, offers a 1-note-per-quadrant approach. Sure, I could just map each drum's region to a quadrant and be done with it, but I felt it'd really rob the intuitiveness and playability of the underlying instrument. It'd just be weird. So, the goal became (c): implement a more natural mapping/playability for the BopPad.
## Another bit in the wall

Using BopPad's provided settings editor, I started off by configuring it to emit not only a note when hit, but also the radial position at which the hit happened*, as a MIDI CC.

This information needed to be unpacked on the host side, so I once again took advantage of REAPER's (my DAW of choice) scripting capabilities**. Long story short, I cooked up a little JSFX plugin that waits for the radial CC information, maps it and re-emits it to the DAW as a configurable note.

The star of the show, _bpq_map_note(), does what you'd expect – mapping the radial position to a user-defined note in an array***.

The playable notes are defined directly in the plugin's UI. That allows the user to save different note configurations for different instruments directly in the DAW, and hot-swap them (even live), without having to reconfigure anything on the BopPad itself. You can check out the full code here.

It ended up doing quite a bit more: it can produce different CCs, pitch bend, map quadrants to different MIDI channels, etc. It uses REAPER's built-in slider controls, so the UI is… peculiar. Yet, the functionality is all there.

* There are some more, less relevant details to this setup; you can check out the import-able configuration for the BopPad here.

** I avoided a lot of details in this single sentence. If you're interested in REAPER and the JSFX/EEL2 programming environment, this post is a gold mine.

*** Arrays in JSFX/EEL2 are weird. This and this were very useful resources on the subject; the code I stole is also interesting to look at.

## Hit it

Astonishingly, it actually works. In the video below, I'm switching between a preset for a djembe (from Atsia Percussion) and a handpan (powered by AmpleSound's Cloud Drum, a really nice free VST), both of which use different mapping schemes configured directly in the plugin.
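The core of the mapping can be sketched like this (a Python sketch of the idea only – the real plugin is written in JSFX/EEL2, and the note numbers below are hypothetical): an incoming radial CC value (0–127) picks one of the user-configured notes for the struck quadrant.

```python
def map_radius_to_note(cc_value, notes):
    """notes: MIDI note numbers for the radial zones, centre first."""
    # split the 0..127 CC range into len(notes) equal radial zones
    idx = min(cc_value * len(notes) // 128, len(notes) - 1)
    return notes[idx]

djembe_zones = [60, 62, 64]   # hypothetical centre / edge / rim notes
assert map_radius_to_note(0, djembe_zones) == 60     # hit at the centre
assert map_radius_to_note(70, djembe_zones) == 62    # mid-radius hit
assert map_radius_to_note(127, djembe_zones) == 64   # hit at the rim
```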
## Get it

REAPER supports ReaPack, a cool and straightforward package manager, and Martins BopPad Mapper can be obtained through it*. To get it,

1. Install ReaPack
2. Add the repository to ReaPack, by opening the "Import repositories…" option and pasting this URL into it: https://raw.githubusercontent.com/MartinBloedorn/ReaThings/master/JSFX/index.xml
3. "Browse packages…" and search for "martins". Select the package, mark it for install and apply the changes.
4. Restart REAPER, and voilà, the plugin will be visible.

* Warm thanks to cfillion and sai'ke for respectively documenting and providing examples for the index format.

'til next time.
http://physics.stackexchange.com/questions/63253/proof-for-commutator-relation-hath-hata-hbar-omega-hata
# Proof for commutator relation $[\hat{H},\hat{a}] = - \hbar \omega \hat{a}$

I know how to derive the equations below, found on Wikipedia, and have done it myself too:

\begin{align} \hat{H} &= \hbar \omega \left(\hat{a}^\dagger\hat{a} + \frac{1}{2}\right)\\ \hat{H} &= \hbar \omega \left(\hat{a}\hat{a}^\dagger - \frac{1}{2}\right)\\ \end{align}

where $\hat{a}=\tfrac{1}{\sqrt{2}} \left(\hat{P} - i \hat{X}\right)$ is an annihilation operator and $\hat{a}^\dagger=\tfrac{1}{\sqrt{2}} \left(\hat{P} + i \hat{X}\right)$ a creation operator. Let me write also that:

\begin{align} \hat{P}&= \frac{1}{p_0}\hat{p} = -\frac{i\hbar}{\sqrt{\hbar m \omega}} \frac{d}{dx}\\ \hat{X}&=\frac{1}{x_0} \hat{x}=\sqrt{\frac{m\omega}{\hbar}}x \end{align}

In order to continue I need a proof that the operators $\hat{a}$ and $\hat{a}^\dagger$ give the following commutators with the Hamiltonian $\hat{H}$:

\begin{align} \left[\hat{H},\hat{a} \right] &= -\hbar\omega \, \hat{a}\\ \left[\hat{H},\hat{a}^\dagger \right] &= +\hbar\omega \, \hat{a}^\dagger \end{align}

These statements can be found on Wikipedia as well as here, but nowhere is it proven that the above commutator relations really hold. I tried to derive $\left[\hat{H},\hat{a} \right]$ and my result was:

$$\left[\hat{H},\hat{a} \right] \psi = -i \sqrt{\frac{\omega \hbar^3}{4m}}\psi$$

You should know that this is the 3rd commutator that I have ever calculated, so it probably is wrong, but here is a photo of my attempt on paper. I would appreciate it if anyone has a link to a proof of the commutator relations (one will do) or could post a proof here.

---

Start with your $\hat{H} = \hbar \omega \left( \hat{a}^\dagger\hat{a} + \frac{1}{2} \right)$. I will omit hat notation from this point.
The commutator then reads as $$\left[ H, a \right] = \hbar \omega \left[ \left( a^\dagger a + \frac{1}{2} \right) a - a \left( a^\dagger a + \frac{1}{2} \right) \right] = \hbar \omega \left( a^\dagger a a - a a^\dagger a \right) ,$$ which is nothing but $$\left[ H, a \right] = \hbar \omega (a^\dagger a - a a^\dagger)a = \hbar \omega \left[ a^\dagger, a \right]a,$$ but we know that $$\left[a^\dagger, a \right] = -1 ,$$ therefore $$\left[ H, a \right] = -\hbar \omega a,$$ QED.

---

On the Wikipedia page you link to there is a derivation of the commutation relation between $\hat{a}$ and $\hat{a}^{\dagger}$, $$[\hat{a},\hat{a}^{\dagger}] = 1.$$ This directly leads to (use the relation $[AB,C]=[A,C]B+A[B,C]$) $$[\hat{a}^{\dagger}\hat{a},\hat{a}] = -\hat{a} , \qquad [\hat{a}^{\dagger}\hat{a},\hat{a}^{\dagger}] = +\hat{a}^{\dagger}.$$ Up to a constant this is the same as $[\hat{H},\hat{a}]$ and $[\hat{H},\hat{a}^{\dagger}]$.
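A numerical cross-check (mine, not part of either answer): in a finite matrix truncation of the oscillator, $H$ is diagonal with entries $\hbar\omega(n+\tfrac12)$, so the relations $[H,a]=-\hbar\omega\,a$ and $[H,a^\dagger]=+\hbar\omega\,a^\dagger$ hold exactly even in the truncated space.

```python
import numpy as np

N = 8
hbar = w = 1.0
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)    # annihilation operator
ad = a.T                                         # creation operator
H = hbar * w * (ad @ a + 0.5 * np.eye(N))        # diagonal: hbar*w*(n + 1/2)

comm = H @ a - a @ H
assert np.allclose(comm, -hbar * w * a)          # [H, a] = -hbar*w*a

comm_dag = H @ ad - ad @ H
assert np.allclose(comm_dag, +hbar * w * ad)     # [H, a^dagger] = +hbar*w*a^dagger
```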
http://mathhelpforum.com/statistics/183374-when-use-addition-multiplication-probability-print.html
# When to use addition or multiplication in probability?

• Jun 21st 2011, 04:15 AM
bullshark818

I am confused about "and" and "or" problems. For example:

When drawing a single card, what is the probability of getting a 4 or a diamond?

When drawing a single card, what is the probability of getting a jack and a black card?

Do the words and/or dictate whether or not I am supposed to use addition or multiplication? My teacher has not done a good job of explaining this. Thanks in advance if you can help me. (Smile)

• Jun 21st 2011, 06:41 AM
Plato

Re: When to use addition or multiplication in probability?

Quote:

Originally Posted by bullshark818
I am confused about "and" and "or" problems. Do the words and/or dictate whether or not I am supposed to use addition or multiplication? My teacher has not done a good job of explaining this.

Actually those words have very little relation to the usage of addition or multiplication. The most widely used set of axioms for probability contains three axioms. One theorem we can prove is: $\mathcal{P}(A\text{ or }B)= \mathcal{P}(A)+ \mathcal{P}(B)- \mathcal{P}( A\text{ and }B).$

I chose that theorem to illustrate the 'mixed' nature of your question. Let's examine those two words. $\mathcal{P}( A\text{ and }B)$ means the probability of both events $A~\&~B$ occurring together. Both happen. On the other hand, $\mathcal{P}( A\text{ or }B)$ means the probability that at least one of the events $A~\&~B$ occurs. One or both happen.

Finding $\mathcal{P}( A\text{ and }B)$ can be easy or difficult. If the events $A~\&~B$ are independent then $\mathcal{P}( A\text{ and }B)= \mathcal{P}(A)\cdot \mathcal{P}(B)$. But if they are dependent then things get more complicated. So you need to stay tuned in the course for what is to come.
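Both rules can be checked by brute force on the two card questions from the original post (my own sketch, not part of the thread): counting over a full 52-card deck reproduces the inclusion-exclusion formula for "or" and the product rule for the independent "and" events.

```python
from fractions import Fraction
from itertools import product

ranks = ['2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K', 'A']
suits = ['clubs', 'diamonds', 'hearts', 'spades']
deck = list(product(ranks, suits))   # 52 cards

def prob(event):
    return Fraction(sum(1 for card in deck if event(card)), len(deck))

# P(4 or diamond): "or" means at least one, so add and subtract the overlap
p_or = prob(lambda c: c[0] == '4' or c[1] == 'diamonds')
assert p_or == Fraction(4, 52) + Fraction(13, 52) - Fraction(1, 52)   # 16/52

# P(jack and black): "and" means both; rank and colour are independent here
p_and = prob(lambda c: c[0] == 'J' and c[1] in ('clubs', 'spades'))
assert p_and == Fraction(4, 52) * Fraction(26, 52)                    # 2/52
```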
https://www.risk.net/foreign-exchange/1502012/normandy-trims-hedging-book-newmont-closes
# Normandy trims hedging book as Newmont closes in

Normandy has quietly been reducing its minimum hedge ratio from 60% to 45% of reserves using 12 to 13 dealers during the past few weeks, said one gold trader. The company issued a statement yesterday confirming its minimum hedge ratio had hit 45%.

This led to a jump in the price of gold. The spot price broke $280 per ounce, rising to $290 before falling back to $287, said one South African dealer. A London-based dealer added that the volatility occurred as hedge fund managers with short gold positions tried to take advantage of expected price increases resulting from Normandy's sell-off. "But then the market realised that the sell-off had actually already taken place, probably over the last two weeks, so the price fell back again," said the dealer.

Explaining its position, Normandy said: "Lower Australian and US interest rates and higher gold interest rates [the cost of borrowing gold] have significantly reduced the gold forward price compared to the spot price, making long-term hedging less attractive." It added that 70% of its gold reserves are now exposed to movements in the spot price.

The development came just one day before a declared deadline for bids to be in for Normandy. AngloGold, part of the London-listed Anglo American group, and a keen hedger, today conceded defeat in the takeover, clearing the way for Denver-based Newmont, a strong anti-hedger, to win control of Normandy. Earlier in the bid battle, Newmont said it would unwind Normandy's entire forwards book should it gain control of the Australian group.

Bobby Godsell, chairman and chief executive of AngloGold, today hinted that he is still on the lookout for acquisitions: "AngloGold's long-term strategy formulation will continue to be informed by our belief in the importance of value-creating consolidation."
https://socratic.org/questions/how-do-you-write-word-phrases-for-these-expressions-7h
# How do you write word phrases for these expressions: 7h?

The word phrase for this would be "seven times $h$".
https://elements.science/research/working-area-2/
# Working Area 2: From collisions of heavy ions to collisions of neutron stars

One of the main goals of ELEMENTS is the understanding of bulk properties of nuclear matter under extreme conditions. At high temperatures or high net baryon densities a new state of matter, a quark-gluon plasma (QGP), is formed. The nature of the transition between the ordinary hadron-gas phase and the QGP phase is still under investigation, as are the detailed properties of those phases. Heavy-ion collisions at varying beam energies allow access to large regions of this phase diagram of strongly interacting matter. Within neutron star mergers, very high densities and moderate temperatures are reached.

## Detailed dynamical modeling

To gain insights on the properties of matter from observables in gravitational-wave signals and heavy-ion reactions, detailed dynamical evolution models are required. The core task of work area 2 is to advance the description of heavy-ion collisions and neutron star mergers within magneto-hydrodynamics and transport approaches. The first neutron star merger events (GW170817 and GW190425) are providing first constraints on maximum masses, radii and tidal deformabilities. In heavy-ion collisions, the main observables include fluctuations and correlations of final particles, the vorticity measured through polarized particles, electromagnetic radiation as well as light nuclei production.
The following milestones have been formulated in work area 2:

• Extract stringent constraints on the nuclear equation of state of QCD matter at high net baryon densities based on HADES experimental data with a hadronic transport approach
• Construct a theoretical framework for spin-MHD and assess the impact on the dynamics of heavy-ion collisions
• Build a most comprehensive set of merger models and calculate their impact on nucleosynthesis yields and kilonova light curves
• Achieve a quantitative understanding of the impact of the high-density equation of state on the observables of binary neutron star mergers

Work area 2 will build a common framework to interpret neutron star mergers and heavy-ion collisions. One particular focus is the consistency of the equation of state between observables from neutron star mergers and heavy-ion reactions.

PIs in this work area include: A. Arcones, T. Aumann, A. Bauswein, H. Elfner, T. Galatyuk, G. Martinez-Pinedo, A. Obertelli, D. Rischke, L. Rezzolla, L. Sagunski, J. Stroth
2022-01-27 23:44:46
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8245251774787903, "perplexity": 2554.067914248128}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305317.17/warc/CC-MAIN-20220127223432-20220128013432-00687.warc.gz"}
https://www.questaal.org/docs/numerics/jigsaw/
# Jigsaw Puzzle Orbitals

### Preliminaries

Questaal’s basis functions consist of augmented waves: smooth envelope functions with holes punched out around atoms, the smooth envelope being replaced (“augmented”) by numerical solutions of the Schrödinger equation. These numerical solutions are very accurate, so basis incompleteness mostly originates from the envelope functions. This page is concerned with a new construction for them, the Jigsaw Puzzle Orbitals. Similar to Questaal’s traditional basis set, Jigsaw Puzzle Orbitals are constructed out of smooth Hankel functions and generalised Gaussian orbitals. The reader is advised to read at least the introductory part of the documentation describing smooth Hankel functions.

### Special properties of Jigsaw Puzzle Orbitals

Jigsaw Puzzle Orbitals (JPOs) have a number of highly desirable properties for a basis set.

• They are short ranged and atom centred, with pure $Y_{lm}$ character on the augmentation boundary where they are centred. Thus they serve as good projectors for special subspaces, e.g. correlated atoms where the correlation is strong in a particular $l$ channel, such as transition metals and $f$-shell metals.
• They are smooth everywhere. This greatly facilitates their practical implementation.
• They have an exponentially decaying asymptotic form far from a nucleus, making them short range.
• They are tailored to the potential. Inside or near an augmentation site, the Schrödinger equation is carried almost entirely by a single function. Thus they form a nearly optimum basis set to solve the Schrödinger equation over a given energy window.

In the figure on the left, the JPO envelope functions are shown for s and p orbitals in a 1D model with two atom centres. The solid parts depict the interstitial region, where the envelope functions carry the wave function. Dashed lines depict augmented regions where the envelope is substituted by partial waves, numerical solutions of the Schrödinger equation.
It is nevertheless very useful that the envelope functions are smooth everywhere, since sharply peaked envelope functions require many plane waves to represent the smooth charge density $n_0$.

At points where the envelope functions and augmented functions join, the function value is unity on the head site ($V_{1p}$ or $V_{1s}$ on the left, and $V_{2p}$ or $V_{2s}$ on the right) and zero on the other site. (By unity we refer to the radial part of the partial wave; the full wave must be multiplied by $Y_{lm}$, which for p orbitals is $\pm 1$ depending on whether the point is right or left of centre.) Moreover, the kinetic energy is tailored to be continuous at the head site and to vanish at the other site. These two facts taken in combination are very important. Consider the Schrödinger equation near an augmentation point. Inside the augmentation region, the partial wave is constructed numerically and is very accurate. It is not quite exact: the partial wave is linearised, and the potential which constructs the partial wave is taken to be the spherical average of the actual potential. But it is well established that the errors are small: the LAPW method, for example, considered to be the gold standard basis set for accuracy, makes the same approximation. Since the kinetic energy is continuous across the boundary, the basis function equally well describes the Schrödinger equation on the other side of the boundary, at least very near the boundary. This alone is not sufficient to make the basis set accurate. Tails in some channel from heads centred elsewhere will contribute to the eigenfunction. They can “contaminate” the accurate solution of the head. However, consider the form of the Schrödinger equation: by construction, both the value and the kinetic energy of all basis functions $V_{Rl}$ vanish at a site, except for the single partial wave that forms the head. Thus any linear combination of them will yield a nearly exact solution of the Schrödinger equation locally.
In the 1D model described above, the JPO basis was applied to the double-exponential potential well shown in black in the figure on the right. The kinetic energy of a traditional smooth Hankel function (green) shows a discontinuity at the augmentation boundary; with JPOs the discontinuity disappears (red). The JPO kinetic energy is everywhere very close to the exact solution (the exact potential and kinetic energies lie on top of each other). In many respects, JPOs are nearly ideal basis functions: they are close to being as compact as possible for solving the Schrödinger equation in a given energy window. JPOs have two important drawbacks, however:

• They are complex objects, complicating their augmentation and the assembly of matrix elements. We can, however, make use of analytic properties of JPO envelope functions to greatly ameliorate the increase in computational cost in making matrix elements. The extra cost is not important provided it does not dominate the total cost.
• There is no analytic form for products of two of them, as there is for plane waves and Gaussian orbitals. However, the same technology built for Questaal’s traditional orbitals based on smooth Hankel functions can be applied here.

### Construction of Jigsaw Puzzle Orbitals

JPOs are tangentially related to Andersen’s tight-binding transformation and NMTO method.2 This transformation makes conventional LMTOs short ranged (see Fig. 5 in the Questaal methods paper, Ref. 3). It makes both conventional LMTOs and smooth Hankel functions short range in the same way, but Questaal’s functions are more accurate, shorter ranged, and suited for modern full-potential methods. NMTOs do establish proof of principle: they are very accurate over the relevant energy window (roughly $E_{F}\pm 1\,$Ry). To date we have succeeded in rotating Questaal’s standard basis functions (convolutions of Hankel and Gaussian functions) into a screened form; see Section 3.12 in the Questaal methods paper, Ref.
3) This basis set spans the same Hilbert space as Questaal’s traditional basis: good, but not optimal. The figure on the left shows screened $d$ envelope functions, $xy$, $yz$, $3z^2{-}1$, $xz$, and $x^2{-}y^2$, for a zincblende lattice. They resemble a traditional Slater-Koster form (having pure $lm$ character), making identification with an atomic orbital easy. The figure on the right shows the bandgap in NiO calculated in the Quasiparticle Self-Consistent GW (QSGW) approximation, comparing Questaal’s traditional basis and a screened form. The bandgap is plotted as a function of the cutoff $\mathbf{|R'-R|}$ in the QSGW self-energy $\Sigma^{0}_{\mathbf{RR}'}$ for traditional envelope functions (blue) and the screened basis set (orange). The first point contains onsite terms only, the second adds first neighbours, etc. The conventional basis shows erratic behaviour until at least the fourth neighbours are present. JPOs converge quickly because their range is smaller than the range of the physical $\Sigma^{0}({\mathbf{r,r}'})$, so the evolution of the gap reflects the nonlocality of $\Sigma^{0}({\mathbf{r,r}'})$. The self-energy is seen to be relatively short ranged, but not confined to a single site as theories such as LDA+U and LDA+DMFT assume. The final step, making the kinetic energy everywhere continuous, slightly perturbs the shape and should significantly improve its accuracy.

### References and Other Resources

1. Many mathematical properties of smoothed Hankel functions and the $H_{kL}$ family are described in this paper: E. Bott, M. Methfessel, W. Krabs, and P. C. Schmid, "Nonsingular Hankel functions as a new basis for electronic structure calculations", J. Math. Phys. 39, 3393 (1998).
2. O. K. Andersen et al., “Electronic Structure and Physical Properties of Solids: The Uses of the LMTO Method,” in Lecture Notes in Physics, Vol. 535 (Springer-Verlag, Berlin, 2000), H. Dreysse, ed.
3. Dimitar Pashov, Swagata Acharya, Walter R. L.
Lambrecht, Jerome Jackson, Kirill D. Belashchenko, Athanasios Chantis, Francois Jamet, Mark van Schilfgaarde, "Questaal: a package of electronic structure methods based on the linear muffin-tin orbital technique", Comp. Phys. Comm. 249, 107065 (2020).

This remark must be qualified. A basis set that is nearly perfect in describing a local potential may still be incomplete in important ways. One particularly important example arises in the context of many-body theory. A cusp appears in the two-particle wave function $\psi(\mathbf{r}_1,\mathbf{r}_2)$ as $\mathbf{r}_1\rightarrow\mathbf{r}_2$ because of the singularity in the Coulomb potential $|\mathbf{r}_1-\mathbf{r}_2|^{-1}$. Smooth functions have difficulty approximating this cusp. The treatment of forces and phonons is another context: as the nucleus moves, the basis must shift with it, because the partial waves are tailored to solve the Schrödinger equation only when the nuclear potential is not displaced. These are sometimes called “Pulay” corrections.

The QSGW bandgap is slightly larger than 5 eV, as shown in the original paper describing Quasiparticle Self-Consistent GW (Phys. Rev. Lett. 93, 126406 (2004)), larger than the experimental gap of about 4.3 eV. Including ladder diagrams in W reduces this gap close to experiment, as will be shown in a paper by Brian Cunningham and co-workers.
http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199298686.001.0001/acprof-9780199298686-appendix-5
## Nasr Ghoniem and Daniel Walgraef

Print publication date: 2008
Print ISBN-13: 9780199298686
Published to Oxford Scholarship Online: May 2008
DOI: 10.1093/acprof:oso/9780199298686.001.0001

# (p.1083) APPENDIX B SHOULD ALL CRYSTALS BE BCC?

Source: Instabilities and Self-Organization in Materials
Publisher: Oxford University Press

We have already discussed the fact that the Landau free energy is a powerful tool for the analysis of symmetry-breaking phase transitions. It may also be applied to the freezing transition of monatomic fluids of spherical particles and should thus mimic the liquid–solid transition [1033]. Its minimum should thus determine the selected crystalline structure at a given temperature and density. Consider the Landau expansion of the free energy, near the liquid phase, in terms of the density $\rho$: (B.1) $Display mathematics$ where $\Phi_0$ is the free energy of the liquid phase. The quadratic term is written as (B.2) $Display mathematics$ where the $\rho_q$ are the Fourier components of the density. In isotropic systems $A(q)$ depends only on the wavenumber $|q|$. Furthermore, near the solid–liquid phase transition, $A(q)$ is minimum for some well-defined wavenumber $q_c$ and one may write (B.3) $Display mathematics$ The order parameter is thus associated with an irreducible representation of the rotational group described by the sphere of radius $q_c$, and (B.4) $Display mathematics$ where the $\mathbf{Q}$ are critical wavevectors of length $q_c$. It is thus independent of explicit combinations of $\rho_{\mathbf{Q}}$.
The cubic invariant on the sphere may be cast in the following form (B.5) $Display mathematics$ where the wavevectors $\mathbf{Q}_i$ have to form equilateral triangles to give nonvanishing contributions. Since $\rho_{\mathbf{Q}} = \rho_{-\mathbf{Q}}^{*}$, this requires at least three $\rho_{\mathbf{Q}_i}$ in the order parameter, which may thus be written as (p.1084) (B.6) $Display mathematics$ As already discussed in the context of pattern selection in nonequilibrium systems, the simplest case is the triangle, $n = 3$, and the order parameter is (B.7) $Display mathematics$ with $|\Phi_3^{\mathrm{hex}}| = 2 B \rho_{q_c}^3/3\sqrt{3}$. In three-dimensional space, the corresponding structure is rod-like with two-dimensional triangular or honeycomb periodicity. When $n = 6$, triangles can be arranged to form octahedra, and (B.8) $Display mathematics$ with $|\Phi_3^{\mathrm{bcc}}| = 4 B \rho_{q_c}^3/3\sqrt{6}$. The corresponding lattice structure in real space is BCC. Note that $|\Phi_3^{\mathrm{bcc}}| > |\Phi_3^{\mathrm{hex}}|$, favoring BCC lattices. More intricate wavevector combinations also give nonvanishing contributions to the cubic invariant, such as dodecahedra, giving $n = 15$. The corresponding spatial structure is based on icosahedra and has five-fold symmetry. It is thus not periodic but only quasi-periodic. Furthermore $|\Phi_3^{\mathrm{bcc}}| > |\Phi_3^{\mathrm{icos}}|$, so BCC crystal lattices are the ones favored by the third-order terms. Other structures, such as FCC crystal lattices ($n = 4$), give no cubic contributions. Their study thus requires the analysis of the next higher-order invariant, $\Phi_4$, which may be written as (B.9) $Display mathematics$ Since the wavevectors must form closed quadrangles to give nonvanishing contributions, $C(\mathbf{Q}_i)$ may depend on two independent angles between wavevectors on the sphere. As a result, $\Phi_4$ may depend on specific material properties, such as bond angles, packing properties, electronic structure, etc. At large densities, thus far away from the transition, these effects may dominate over universal features and favor structures other than BCC crystal lattices.
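The comparison of the cubic-invariant amplitudes quoted above is a one-line arithmetic check. The sketch below is only an illustration using the prefactors $2/(3\sqrt{3})$ and $4/(3\sqrt{6})$ stated in the text, with $B = \rho_{q_c} = 1$ as assumed placeholder values; it confirms that the octahedral (BCC) arrangement gives the larger cubic invariant.

```python
import math

# Amplitudes of the cubic invariant for B = rho_qc = 1, as given in the text:
# |Phi3_hex| = 2 B rho^3 / (3 sqrt(3))   (n = 3, triangles)
# |Phi3_bcc| = 4 B rho^3 / (3 sqrt(6))   (n = 6, octahedra)
phi3_hex = 2.0 / (3.0 * math.sqrt(3.0))
phi3_bcc = 4.0 / (3.0 * math.sqrt(6.0))

print(f"hex: {phi3_hex:.4f}  bcc: {phi3_bcc:.4f}")

# The BCC arrangement wins, so the universal cubic term favors BCC.
assert phi3_bcc > phi3_hex
```

The ratio $\sqrt{2/3}\cdot 2 \approx 1.41$ in favor of BCC is independent of $B$ and $\rho_{q_c}$, which is why the argument is called universal.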
The previous discussion thus leads to the following conclusion. In the case of weak first-order transitions, which occur at low densities, and as long as specific forces are not too important, universal, model-independent effects favor the formation of BCC crystal lattices. However, when specific forces are dominant, other structures, such as FCC lattices, could emerge through a second-order mean-field transition. Such a transition implies the existence of a solid–liquid critical point. This is in contradiction with the Landau argument that such a transition may not (p.1085) be continuous, since liquid and solid phases have different symmetry groups. However, agreement with Landau theory is recovered when fluctuations are taken into account. Effectively, for rotationally symmetric liquids, the fluctuation dispersion relation, $\omega_q \propto \frac{T - T_c}{T_c} + d_0 (q^2 - q_c^2)^2$, depends only on the length of the wavevector $q$ and not on its orientation. For deviations from the critical wavevectors, it thus has a one-dimensional behavior. Indeed, on writing $\mathbf{q} = q_c \mathbf{1}_x + \mathbf{k}$ (the orientation of the x-axis being arbitrary), one has $\omega_k \propto \frac{T - T_c}{T_c} + 4 d_0 q_c^2 k_x^2 + \ldots$ As shown by Brazovskii [1034], this implies diverging fluctuations which transform the transition from continuous to first order. Brazovskii’s argument may be sketched as follows in the simplest case of transitions to one-dimensional periodic structures in $d$-dimensional systems.
The corresponding $(\Phi_2, \Phi_4)$ Landau free energy, including noncritical fluctuations, may be written as (B.10) $Display mathematics$ and the minimization equations for the densities $\rho_{q_c}$, with $\mathbf{q}_c = q_c \mathbf{1}_x$, including two-point correlation functions of the fluctuations $\rho_q$, with $\mathbf{q} = \mathbf{q}_c + \mathbf{k}$, are (B.11) $Display mathematics$ with (B.12) $Display mathematics$ and (B.13) $Display mathematics$ These equations may be solved iteratively and, for $T < T_c$, one has (B.14) $Display mathematics$ Hence the transition is shifted downwards to $T = T_c(1 - \Gamma^{2/3})$ and becomes first order.
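The one-dimensional character of the dispersion quoted above follows from a short expansion: substituting $q = q_c + k_x$ into $d_0 (q^2 - q_c^2)^2$ gives $(2 q_c k_x + k_x^2)^2 = 4 q_c^2 k_x^2 + O(k_x^3)$. A quick numerical check of this leading behaviour, purely illustrative with $d_0 = q_c = 1$:

```python
d0, qc = 1.0, 1.0  # illustrative values; the expansion holds for any d0, qc > 0

def omega_shift(k):
    # Wavevector-dependent part of the dispersion along the x-axis,
    # i.e. omega_q minus the (T - Tc)/Tc offset, at q = qc + k.
    return d0 * ((qc + k) ** 2 - qc ** 2) ** 2

# ((qc+k)^2 - qc^2)^2 = (2 qc k + k^2)^2 = 4 qc^2 k^2 + 4 qc k^3 + k^4,
# so for small k the leading behaviour is 4 d0 qc^2 k^2, quadratic in k.
k = 1e-4
leading = 4.0 * d0 * qc ** 2 * k ** 2
assert abs(omega_shift(k) / leading - 1.0) < 1e-3
print("relative error of the quadratic approximation:",
      abs(omega_shift(k) / leading - 1.0))
```

The quadratic-in-$k_x$ form, independent of the transverse components of $\mathbf{k}$, is what gives the fluctuations their effectively one-dimensional phase space.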
https://math.stackexchange.com/questions/4079112/comparison-of-definitions-for-functions-of-bounded-variation/4088993
# Comparison of definitions for Functions of Bounded Variation

I have been trying to understand functions of bounded variation and I came across the following definitions.

Definition 1: A function $$f:\mathbb{R^d} \rightarrow \mathbb{R}$$ is of bounded variation iff $$\begin{split} \operatorname{TV}(f)&:=\int\limits_{\mathbb{R}^{d-1}}\mathcal{TV}(f(\cdot,x_2,\cdots,x_d))dx_2 \cdots dx_d +\cdots +\\ & \quad+\cdots+\int\limits_{\mathbb{R}^{d-1}}\mathcal{TV}(f(x_1, \cdots, x_{d-1},\cdot)) dx_1\cdots dx_{d-1} < \infty, \end{split}$$ where, for $$g:\mathbb{R} \rightarrow \mathbb{R}$$, $$\mathcal{TV}(g):=\sup \left\{\sum\limits_{k=1}^N{\left|g(\xi_k)-g(\xi_{k-1})\right|}\right\}$$ and the supremum is taken over all $$N \geq 1$$ and all partitions $$\{\xi_0,\xi_1,\ldots,\xi_N\}$$ of $$\mathbb{R}$$.

Definition 2: A function $$f:\mathbb{R^d} \rightarrow \mathbb{R}$$ is of bounded variation iff $$\operatorname{TV}(f)= \sup \left\{\,\int\limits_{\mathbb{R}^d}f \operatorname{div}(\phi): \phi \in C_c^1(\mathbb{R^d})^d, \|\phi\|_{L^{\infty}} \leq 1\, \right\} < \infty.$$ Clearly, if $$f$$ is of bounded variation in the sense of definition 2, it may not be of bounded variation in the sense of definition 1. In this regard, I have the following doubts.

1. If $$f$$ satisfies definition 1, does $$f$$ satisfy definition 2? (I felt so but could not prove it rigorously.)
2. If [1] is true, are the values of $$\operatorname{TV}(f)$$ calculated by definition 1 and definition 2 equal?
3. If $$f$$ satisfies definition 2, does there exist a function $$g:\mathbb{R}^d \rightarrow \mathbb{R}$$ a.e. equal to $$f$$ such that $$g$$ satisfies definition 1? If so, how to prove it?

P.S.: I have read somewhere that 3 is true in one dimension, and in fact we can find $$g$$ which is right continuous. But I could not find a rigorous proof, and I could not find any such result in multi-d.

• Are both definitions equivalent for $d = 1$?
In higher dimensions, I'd try using $\Phi = \phi_1e_1+\cdots+\phi_ne_n$, with $\phi_i$ a suitable choice. Can you prove something under the assumption that $f$ is smooth? – user90189 Apr 2 at 13:01

The questions, despite looking like a representation problem in functional analysis, are much deeper, as they bring out the history of the topic involved, notably $$BV$$ functions and the reasons why the customary definition adopted for the variation of a multivariate function is definition 2 above. The answers below therefore need to dwell a bit on this history: that said, let's start.

1. If $$f$$ satisfies definition 1, then do we have that $$f$$ satisfies definition 2? (I felt so but could not prove it rigorously.)

No: the two definitions are in general not equivalent. The main problem is that definition 1 is not invariant with respect to coordinate changes for all $$L^1$$ functions: in particular, there exist functions for which the value of the variation $$\mathrm{TV}(f)$$ depends on the choice of coordinate axes, as shown by Adams and Clarkson ([1], pp. 726-727) with their counterexample. Precisely, by using the ternary set, they construct a function of two variables such that the total variation according to definition 1 passes from a finite value to an infinite one simply by a rotation of the coordinate axes through an angle of $${\pi}/{4}$$. However, for particular classes of functions the answer is yes: this happens for example for continuous functions, as Leonida Tonelli was well aware when he introduced definition 1. We'll see something more in the joint answer to the second and third questions.

2. If [1] is true, are the values of $$\operatorname{TV}(f)$$ calculated by definition 1 and definition 2 equal?
3. If $$f$$ satisfies definition 2, does there exist a function $$g:\mathbb{R}^d \rightarrow \mathbb{R}$$ a.e. equal to $$f$$ such that $$g$$ satisfies definition 1? If so, how to prove it?
Since definition 1 is not coordinate invariant in $$L^1$$ while definition 2 is, the answer to questions 2 and 3 is no. However, things change if, instead of the total (pointwise) variation $$\mathcal{TV}$$, one considers the essential variation defined as $$\newcommand{\eV}{\mathrm{essV}} \eV(g):=\inf \left\{\mathcal{TV}(v) : g=v\;\; L^1\text{-almost everywhere (a.e.) in }\Bbb R\right\}$$ (see [2], §3.2, p. 135 or [4], §5.3, p. 227 for an alternative definition involving approximate continuity, closer to Lamberto Cesari's original approach). Then you have the following theorem.

Theorem 5.3.5 ([4], pp. 227-228). Let $$f\in L^1_\text{loc}(\mathbb{R}^n)$$. Then $$f\in BV_\text{loc}(\Bbb R^n)$$ if and only if $$\int\limits_{R^{n-1}}\eV_i\big(f(x)\big)\,{\mathrm{d}} x_1\cdots{\mathrm{d}}x_{i-1}\cdot {\mathrm{d}}x_{i+1}\cdots {\mathrm{d}}x_n< \infty\quad \forall i=1,\ldots,n$$ where

• $$\eV_i\big(f(x)\big)$$ is the essential variation of the one-dimensional sections of $$f$$ along the $$i$$-axis, and
• $$R^{n-1}\subset \Bbb R^{n-1}$$ is any $$(n-1)$$-dimensional hypercube.

This result, apart from its intrinsic interest, is valuable since it allows us to prove a variant of the sought-for result, namely $$\sum_{i=1}^n \int\limits_{R^{n-1}}\eV_i\big(f(x)\big)\,{\mathrm{d}} x_1\cdots{\mathrm{d}}x_{i-1}\cdot {\mathrm{d}}x_{i+1}\cdots {\mathrm{d}}x_n =\sup \left\{\,\int\limits_{\mathbb{R}^d}f \operatorname{div}(\phi): \phi \in C_c^1(\mathbb{R^d})^d, \|\phi\|_{L^{\infty}} \leq 1\, \right\}\label{1}\tag{V}$$ The proof of \eqref{1} follows from the proof of Theorem 5.3.5 above, in that the method is the same but, instead of the single $$i$$-th axis ($$i=1,\ldots,n$$) essential variation, the sum of the $$n$$ essential variations is considered.
Also, both sides of equation \eqref{1} are lower semicontinuous; thus, given any sequence of $$BV$$ functions $$\{f_j\}_{j\in\Bbb N}$$ for which they converge to a common (finite) value, it is possible to find a subsequence converging to a $$BV$$ function $$f$$: simply stated, the supremum is attained for the limit function of the subsequence and thus it is a maximum. Thus questions 2 and 3 have an affirmative answer if the essential variation is considered instead of the (pointwise) total variation.

Notes

• Definition 1 defines the so-called "total variation in the sense of Tonelli", and was introduced by Leonida Tonelli only for continuous functions, since the problem of the non-invariance of the value of the variation with respect to a change of coordinate axes, pointed out by Adams and Clarkson ([1], pp. 726-727), does not exist in this class. The multidimensional total variation defined by using the essential variation, i.e. $$\mathrm{TV}(f)=\sum_{i=1}^n \int\limits_{R^{n-1}}\eV_i\big(f(x)\big)\,{\mathrm{d}} x_1\cdots{\mathrm{d}}x_{i-1}\cdot {\mathrm{d}}x_{i+1}\cdots {\mathrm{d}}x_n$$ is called the "total variation in the sense of Tonelli and Cesari" and was introduced by Lamberto Cesari in [3], pp. 299-300, to overcome the known limitation of definition 1.
• I took reference [1] from the answer by @Piotr Hajlasz to this Q&A: as I pointed out there, definition 1 is the original definition of bounded variation for functions of several variables given by Lamberto Cesari in 1936. Definition 2 was introduced later by Mario Miranda in the early sixties of the 20th century.

References

[1] C. Raymond Adams, James A. Clarkson, "Properties of functions $$f(x,y)$$ of bounded variation" (English), Transactions of the American Mathematical Society 36, 711-730 (1934), MR1501762, Zbl 0010.19902.
[2] Luigi Ambrosio, Nicola Fusco, Diego Pallara, Functions of Bounded Variation and Free Discontinuity Problems, Oxford Mathematical Monographs, New York and Oxford: The Clarendon Press/Oxford University Press, pp. xviii+434 (2000), ISBN 0-19-850245-1, MR1857292, Zbl 0957.49001.

[3] Lamberto Cesari, "Sulle funzioni a variazione limitata" ["On functions of bounded variation"] (Italian), Annali della Scuola Normale Superiore, Serie II, 5 (3-4), 299-313 (1936), MR1556778, Zbl 0014.29605.

[4] William P. Ziemer, Weakly Differentiable Functions. Sobolev Spaces and Functions of Bounded Variation, Graduate Texts in Mathematics, 120, New York: Springer-Verlag, pp. xvi+308 (1989), ISBN 0-387-97017-7, MR1014685, Zbl 0692.46022.

The answer to all three questions is yes, but there are some subtleties. In Definition 2, if you modify the function on a set of measure zero, $$TV$$ does not change. So you need to take sets of measure zero into account in Definition 1 when $$d>1$$. The idea is to change $$\mathcal{TV}$$: instead of using arbitrary partitions, you only use partitions made of points which are Lebesgue points of your function. This is called the essential pointwise variation of the function. You can find these results in Leoni. The case $$d=1$$ is Theorem 7.3 and is exactly what you wrote. For $$d>1$$ the result you want is due to Serrin and is given in Theorem 14.20, using the essential pointwise variation instead of the pointwise variation. For $$d=1$$ you can also look at Evans and Gariepy, Theorem 5.21. Unfortunately, none of the proofs are easy.
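In one dimension, the pointwise variation of Definition 1 can be probed numerically: evaluate the partition sum on finer and finer grids and watch it saturate. The sketch below is only an illustration (uniform grids, continuous functions on a bounded interval, not a proof-grade computation); for a monotone $g$ every partition sum collapses to $g(b)-g(a)$, and for $g(x)=\sin x$ on $[0,2\pi]$ the variation tends to 4.

```python
import math

def pointwise_variation(g, a, b, n):
    """Partition sum sum_k |g(xi_k) - g(xi_{k-1})| on a uniform grid of [a, b].

    For continuous g, refining the grid (increasing n) approaches the
    pointwise total variation TV(g) from below.
    """
    xs = [a + (b - a) * k / n for k in range(n + 1)]
    return sum(abs(g(xs[k]) - g(xs[k - 1])) for k in range(1, n + 1))

# Monotone function: the variation is g(b) - g(a) for every partition.
tv_exp = pointwise_variation(math.exp, 0.0, 1.0, 1000)

# sin on [0, 2*pi]: rises by 1, falls by 2, rises by 1, so TV = 4.
tv_sin = pointwise_variation(math.sin, 0.0, 2.0 * math.pi, 1000)

print(tv_exp, tv_sin)
```

Note this probes only the one-dimensional pointwise variation; the coordinate-dependence issues discussed in the answers are genuinely multi-dimensional and invisible here.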
https://blender.stackexchange.com/questions/163111/image-file-not-showing-in-file-explorer
# Image file not showing in file explorer

I try to change an image texture; the dialogue box opens so I can view the file explorer, but the image I downloaded (a .jpeg off Google) is nowhere to be found, as if it isn't there. I have a folder for all my blender files, and other image files load just fine.

To disable the filters (in 2.81; it might be different in earlier versions): at the top of the Blender file explorer there should be a filter icon. Click it and uncheck the "Filter" box. A test to see if .jpeg isn't recognized by Blender is simply to change the file's extension to .jpg and see if it pops up after refreshing.
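The extension-rename test suggested above can be scripted if you have many downloaded textures. This is a minimal sketch (the filename `texture.jpeg` is a hypothetical example, not from the question); it only changes the extension, leaving the file contents untouched.

```python
from pathlib import Path

def jpeg_to_jpg(path):
    """Rename a .jpeg file to .jpg so pickier file browsers list it.

    Returns the new Path. Only the extension changes; the image data
    is not re-encoded, since .jpeg and .jpg are the same format.
    """
    p = Path(path)
    if p.suffix.lower() != ".jpeg":
        raise ValueError(f"expected a .jpeg file, got {p.suffix!r}")
    target = p.with_suffix(".jpg")
    p.rename(target)
    return target
```

Run it on the downloaded image before opening Blender's file dialog, e.g. `jpeg_to_jpg("texture.jpeg")`, then refresh the file browser.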
https://math.portonvictor.org/2018/07/14/new-easy-theorem/
# New easy theorem

I have added a new easy (but previously unnoticed) theorem to my book:

Proposition. $(\mathsf{RLD})_{\mathrm{out}} f\sqcup (\mathsf{RLD})_{\mathrm{out}} g = (\mathsf{RLD})_{\mathrm{out}}(f\sqcup g)$ for funcoids $f$, $g$.
http://mathoverflow.net/questions/58320/intersections-of-conjugates-of-lie-subgroups
# Intersections of conjugates of Lie subgroups

Let $G$ be a closed, connected Lie group, and let $H$ be a closed (and therefore Lie) subgroup. There is a natural action of $G$ on the space of left cosets $G/H$, for which the stabiliser of $aH$ is the conjugate subgroup ${}^aH:=aHa^{-1}$. Now let $G$ act diagonally on $G/H\times G/H$. The stabiliser of $(aH,bH)$ is the intersection ${}^aH\cap {}^bH$ of conjugate subgroups. My question is, what can be said about ${}^aH\cap {}^bH$? I know that ${}^aH={}^bH$ if and only if $ab^{-1}\in N(H)$, the normaliser of $H$ in $G$. Beyond this I couldn't find much via Google. For instance, can ${}^aH\cap {}^bH$ be trivial? Or must it be a conjugate of $H$? Apologies if this is too elementary.

Edit: Thanks for the answers, which show that not much can be said at this level of generality. Now I am looking at a specific example: the icosahedral group $I\cong A_5$ inside $SO(3)$ (where the space of left cosets is the Poincaré sphere). This subgroup has the properties that $I$ is finite, $N(I)=I$ and $I/[I,I]$ is trivial. Does this allow me to conclude that conjugates of $I$ are either equal or intersect in the identity? More generally, what can be said if I add the assumption $N(H)=H$ to the original question?

Edit 2: I've now asked about the icosahedral group in another question. - Excuse me, but in the last line, shouldn't there be ${}^aH\cap{}^bH$ instead of ${}^aH={}^bH$? Or not? – Giuseppe Tortorella Mar 13 '11 at 9:56 Yes, thanks, duly edited. – Mark Grant Mar 13 '11 at 10:00 If you consider groups whose cardinality is less than or equal to $\aleph_0$ and endow them with the discrete topology, then you get 0-dimensional Lie groups. Now take $G=S_n$, the symmetric group of $\{1,\ldots,n\}$, and $H_i$ the subgroup of permutations fixing $i$, for $i=1,\ldots,n$; these are obviously conjugate to each other. Trivially $H_1\cap H_2$ is not conjugate to $H_1$. Excuse me if this answer is not what you wanted.
– Giuseppe Tortorella Mar 13 '11 at 10:19 Thanks Giuseppe. In my applications $G$ is connected, but your comment shows I was being too optimistic (as does Jack's connected answer). – Mark Grant Mar 13 '11 at 13:25 ## 3 Answers Suppose G=SL(2,C) and let H be the stabilizer of a line (so a Borel subgroup). The matrix $$\begin{pmatrix}a&b\\\\c&d\end{pmatrix}$$ in G acts on projective space by $$z \mapsto \frac{az+b}{cz+d}$$ The stabilizer of ∞ is the matrices with c=0, a Borel subgroup. The stabilizer of both ∞ and 0 is the matrices with b=c=0, a maximal torus. In particular, a two-point stabilizer is abelian (the intersection of two Borel subgroups), and a Borel subgroup is non-abelian. Hence they are not isomorphic. This is just a connected version of Giuseppe's answer. If you consider the Borel subgroup to be the Lie group itself, then you get an example where the intersection of the conjugates is trivial. If G is the group of 2×2 matrices with c=0 and d=1 acting on projective space, then the stabilizer of 0 has b=0, and the stabilizer of both 0 and 1 has a=1 and b=0. In particular, G=AGL(1) and H is a maximal torus, and the intersection of two conjugates of H is the identity. When the intersection is the identity, this is called being sharply two-transitive or having a regular stabilizer. - @Mark Grant: More exotic intersections are certainly possible: Take G=Alt(6) wr Sym(2), H=Alt({1..6})×Alt({7..11}), then H is self-normalizing. The conjugates of H are just point stabilizers, and the intersections of conjugates are just 2-point stabilizers. The stabilizer of 11, 12 is Alt({1..6})×Alt({7..10})≅A6×A4. The stabilizer of 6, 12 is Alt({1..5})×Alt({7..11})≅A5×A5. Similar ideas happen in any wreath product, but this one was chosen so that H is self-normalizing (so that a point stabilizer moves all other points) and perfect. I don't know a connected version of the wreath product.
– Jack Schmidt Mar 18 '11 at 16:43 The motivation for the question is unclear to me, but it seems much too broad to have an interesting answer even if you limit consideration to connected algebraic subgroups of a general linear group (complex or real). As Jack points out, there are easy examples showing various possible outcomes when you intersect a group with a conjugate. The structure theory of semisimple or other Lie/algebraic groups has been extensively studied and makes it easy to find lots of further examples of this type. For instance, a Borel subgroup intersects a conjugate in at least a maximal torus when the ambient group is reductive; the intersection may be precisely a maximal torus if the Borel subgroups are "opposite". Concretely, this is seen when intersecting the upper triangular and lower triangular matrices in the $n \times n$ matrix group: the result is the group of nonsingular diagonal matrices. At another extreme, intersecting this diagonal group with a typical conjugate by an upper triangular unipotent matrix will typically produce a finite group. - I disagree with your premise. Clearly, if you have a transitive action of a Lie group, it is of interest to study the fixator of two points. This motivates the study of $G/H$ and the intersection of two conjugates of $H$. In the case of a two-transitive action, there is even a very interesting answer available: these actions have been classified by Tits (and Borel). See L. Kramer, Two-transitive Lie groups, for a beautiful exposition and simplified proof. – Guntram Mar 13 '11 at 14:36 @Guntram: There are certainly such well-focused special cases which are interesting, but the question as asked is too broad and the third paragraph there illustrates the need for cautionary examples. – Jim Humphreys Mar 13 '11 at 17:54 Excuse my misunderstanding. Please accept a connected example in substitution. 
Let $G=SE(2)$ be the special euclidean group of the plane, and $H_x$ the stabilizer subgroup of $x$, for any $x\in\mathbb{R}^2$. These latter ones are conjugate to each other, but $H_x\cap H_y$ is trivial for any pair of distinct points $x$ and $y$ in the plane. -
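For the $SE(2)$ example, triviality of $H_x\cap H_y$ is easy to see concretely: an element of $H_x$ is a rotation about $x$, and a nontrivial rotation about $x$ moves every other point of the plane. A minimal numerical sketch (my own illustration, not from the thread; the function names are mine):

```python
import math

def rot_about(p, theta):
    """Element of SE(2): rotation by angle theta about the point p."""
    c, s = math.cos(theta), math.sin(theta)
    px, py = p
    def apply(q):
        x, y = q[0] - px, q[1] - py
        return (px + c * x - s * y, py + s * x + c * y)
    return apply

x, y = (0.0, 0.0), (1.0, 0.0)
g = rot_about(x, 0.7)   # a nontrivial element of the stabiliser H_x
assert g(x) == x        # g fixes x ...
assert g(y) != y        # ... but moves y, so g does not lie in H_y
```

Since the only angle fixing a second point is a multiple of $2\pi$, the two stabilisers meet only in the identity, as the answer states.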
https://ryleealanza.org/2021/06/09/A-Mathematical-Postcard.html
# A Mathematical Postcard 09 Jun 2021 I submitted a “mathematical postcard” (see below) to the Nearly Carbon Neutral Geometric Topology conference. The purpose of this post is to share the postcard and offer a little more context and explanation for it. I’m also giving a talk in said conference, but I’ll talk about that in a separate post. Here is the postcard: Let me say a little more about the space we’re talking about. Let $F = A_1*\dotsb*A_n*F_k$ be a free product of the groups $$A_1,\dotsc,A_n$$ with a free group of rank $$k$$. It follows from elementary Bass–Serre theory that $$F$$ acts on a (simplicial) tree $$T$$ with trivial edge stabilizers and vertex stabilizers each conjugate to some $$A_i$$. Let’s assume that the action is minimal, i.e. that there is no proper invariant subtree. Say two such trees $$S$$ and $$T$$ are equivalent if there is an $$F$$-equivariant homeomorphism from $$S$$ to $$T$$. It turns out that there are many inequivalent, minimal actions! Guirardel and Levitt define an Outer Space to parametrize them all (up to a finer notion of equivalence: isometry). It is a topological space on which the group of those outer automorphisms of $$F$$ which preserve the conjugacy classes of the $$A_i$$ acts. If the $$A_i$$ are freely indecomposable, for example, if they are finite, then this is all of $$\operatorname{Out}(F)$$. This Outer Space admits a spine, a subspace which is a simplicial complex and onto which it deformation retracts $$\operatorname{Out}(F)$$-equivariantly. Vertices of the spine are equivalence classes of minimal actions as above. Two vertices are connected by an edge when one can be obtained from the other by collapsing an orbit of edges. When $$n \ge 2$$, the dimension of this spine is $$n + 2k - 2$$. When each $$A_i$$ is finite, the spine is locally finite, and $$\operatorname{Out}(F)$$ acts with finite stabilizers and finite quotient. 
This is the bread-and-butter situation for geometric group theory, so outer automorphism groups of free products of finite and cyclic groups are interesting to study as a class because one can use the geometry of the spine of Outer Space to investigate them. Again by Bass–Serre theory, one can think of vertices of the spine via their quotient graphs of groups $$\mathcal{G}$$, as is happening in the postcard. These quotient graphs of groups will have first Betti number $$k$$ and $$n$$ vertices with nontrivial vertex group each isomorphic to one of the $$A_i$$. The combinatorial types of the examples in the postcard play a particularly special role in studying the spine of Outer Space. I call vertices combinatorially equivalent to the first and last examples thistles with $$n$$ prickles and $$k$$ petals (here $$n = 3$$ and $$k = 4$$). Vertices combinatorially equivalent to the middle examples are bugs with a head, $$n-1$$ legs and $$k$$ wings. One reason thistles and bugs are special is illustrated in the postcard: collapse Whitehead moves, along with their cousins the expand Whitehead moves, form a system of paths in the spine of Outer Space that connect any two thistles to each other. Collins and Zieschang prove a version of Whitehead’s algorithm for free products using in particular two families of Whitehead automorphisms that they call $$S$$-Whitehead automorphisms and $$J$$-multiple Whitehead automorphisms. The former correspond to expand Whitehead moves, and the latter to collapse Whitehead moves, and their result says that any two thistles in the spine of Outer Space may be connected by a path of Whitehead moves which has a very nice interplay with a norm one can put on thistles. Let me finish by indicating my interest in these paths: Vogtmann proved that the outer automorphism group of a free group of rank $$k$$ is what’s called simply connected at infinity for $$k \ge 5$$. 
Briefly, a space $$X$$ is simply connected at infinity if it has one end and for any compact subset $$K$$, there is a compact subset $$L$$ with $$L \supset K$$ such that any loop in $$X\setminus L$$ is nullhomotopic via a homotopy in $$X\setminus K$$. A group is simply connected at infinity if some (and hence any) simplicial complex on which it acts with finite stabilizers and finite quotient is simply connected at infinity. So $$\mathbb{R}$$ has two ends, $$\mathbb{R}^2$$ has one end but is not simply connected at infinity, and $$\mathbb{R}^{n}$$ is simply connected at infinity for $$n \ge 3$$; therefore $$\mathbb{Z}^n$$ is simply connected at infinity for $$n \ge 3$$. Vogtmann’s proof involves a careful understanding of the combinatorics of (expand) Whitehead moves in the spine of Outer Space (for a free group). I’m in the early stages of working on generalizing her result to free products of finite and cyclic groups. The general strategy of the proof will follow hers, but already I am seeing that the presence of collapse Whitehead moves will require their own analysis to handle.
https://mathematica.stackexchange.com/questions/88240/surface-area-of-a-region-inside-a-region
# Surface Area of a Region inside a Region I need to find the surface area of a region which is within another region. The regions I'm working with are fairly complex and so I'm looking for a general solution, but I'll use a cylinder and tetrahedron as an example. Let's say I have the following regions: Container = Cylinder[{{0, 0, 0}, {0, 0, 2}}, 1]; MyRegion = Tetrahedron[{{0, 2, 0}, {0, -2, 0}, {-2, 0, 0}, {0, 0, 2.5}}]; I want to find the surface area of MyRegion which is contained by Container. The way I imagined doing this was using RegionIntersection, finding a boundary mesh, and then using Area. Then I would need to somehow figure out what area of the intersection is not in common with the boundary of MyRegion, but I'm not sure how to approach that or if that is even the best method. Any suggestions? Edit: My code generates a function that is used in an implicit region - so it's not always the same. I've included a sample one below (sorry - I was trying to avoid posting such a large function). Technically I'm just interested in the surface area, but as I have it, the ImplicitRegion will be a volume. f[x_,y_,z_]:=0.564936 Log[( 6.2 + 2 Sqrt[((-3.1 + y)^2 + (-1.67 + z)^2)^2 + (-3.1 - Abs[0. + x])^2] + 2 Abs[0. + x])/(-6.2 + 2 Sqrt[((-3.1 + y)^2 + (-1.67 + z)^2)^2 + (3.1 - Abs[0. + x])^2] + 2 Abs[0. + x])] + 0.564936 Log[(6.2 + 2 Sqrt[((3.1 + y)^2 + (-1.67 + z)^2)^2 + (-3.1 - Abs[0. + x])^2] + 2 Abs[0. + x])/(-6.2 + 2 Sqrt[((3.1 + y)^2 + (-1.67 + z)^2)^2 + (3.1 - Abs[0. + x])^2] + 2 Abs[0. + x])] - 0.564936 Log[(6.2 + 2 Sqrt[((0. + y)^2 + (0. + z)^2)^2 + (-3.1 - Abs[0. + x])^2] + 2 Abs[0. + x])/(-6.2 + 2 Sqrt[((0. + y)^2 + (0. + z)^2)^2 + (3.1 - Abs[0. + x])^2] + 2 Abs[0. + x])] + 0.564936 Log[(6.2 + 2 Sqrt[((-3.1 + x)^2 + (-1.67 + z)^2)^2 + (-3.1 - Abs[0. + y])^2] + 2 Abs[0. + y])/(-6.2 + 2 Sqrt[((-3.1 + x)^2 + (-1.67 + z)^2)^2 + (3.1 - Abs[0. + y])^2] + 2 Abs[0. + y])] + 0.564936 Log[(6.2 + 2 Sqrt[((3.1 + x)^2 + (-1.67 + z)^2)^2 + (-3.1 - Abs[0. 
+ y])^2] + 2 Abs[0. + y])/(-6.2 + 2 Sqrt[((3.1 + x)^2 + (-1.67 + z)^2)^2 + (3.1 - Abs[0. + y])^2] + 2 Abs[0. + y])] - 0.564936 Log[(6.2 + 2 Sqrt[((-3.1 + x)^2 + (0. + z)^2)^2 + (-3.1 - Abs[0. + y])^2] + 2 Abs[0. + y])/(-6.2 + 2 Sqrt[((-3.1 + x)^2 + (0. + z)^2)^2 + (3.1 - Abs[0. + y])^2] + 2 Abs[0. + y])] - 0.564936 Log[(6.2 + 2 Sqrt[((3.1 + x)^2 + (0. + z)^2)^2 + (-3.1 - Abs[0. + y])^2] + 2 Abs[0. + y])/(-6.2 + 2 Sqrt[((3.1 + x)^2 + (0. + z)^2)^2 + (3.1 - Abs[0. + y])^2] + 2 Abs[0. + y])] + 0.564936 Log[(1.51 + 2 Sqrt[((-3.1 + x)^2 + (-3.1 + y)^2)^2 + (-0.755 - Abs[-0.915 + z])^2] + 2 Abs[-0.915 + z])/ (-1.51 + 2 Sqrt[((-3.1 + x)^2 + (-3.1 + y)^2)^2 + (0.755 - Abs[-0.915 + z])^2] + 2 Abs[-0.915 + z])] + 0.564936 Log[(1.51 + 2 Sqrt[((3.1 + x)^2 + (-3.1 + y)^2)^2 + (-0.755 - Abs[-0.915 + z])^2] + 2 Abs[-0.915 + z])/ (-1.51 + 2 Sqrt[((3.1 + x)^2 + (-3.1 + y)^2)^2 + (0.755 - Abs[-0.915 + z])^2] + 2 Abs[-0.915 + z])] + 0.564936 Log[( 1.51 + 2 Sqrt[((-3.1 + x)^2 + (3.1 + y)^2)^2 + (-0.755 - Abs[-0.915 + z])^2] + 2 Abs[-0.915 + z])/(-1.51 + 2 Sqrt[((-3.1 + x)^2 + (3.1 + y)^2)^2 + (0.755 - Abs[-0.915 + z])^2] + 2 Abs[-0.915 + z])] + 0.564936 Log[( 1.51 + 2 Sqrt[((3.1 + x)^2 + (3.1 + y)^2)^2 + (-0.755 - Abs[-0.915 + z])^2] + 2 Abs[-0.915 + z])/(-1.51 + 2 Sqrt[((3.1 + x)^2 + (3.1 + y)^2)^2 + (0.755 - Abs[-0.915 + z])^2] + 2 Abs[-0.915 + z])] - 0.564936 Log[(1.67 + 2 Sqrt[((0. + x)^2 + (0. + y)^2)^2 + (-0.835 - Abs[-0.835 + z])^2] + 2 Abs[-0.835 + z])/(-1.67 + 2 Sqrt[((0. + x)^2 + (0. + y)^2)^2 + (0.835 - Abs[-0.835 + z])^2] + 2 Abs[-0.835 + z])]; MyRegion=ImplicitRegion[f[x, y, z] >= 5, {{x, -5.3, 5.3}, {y, -5.3, 5.3}, {z, 0, 1.67}}] Container=Cylinder[{{0,0,0},{0,0,1.67}},5.3] • In principle, you could just do Area[RegionIntersection[RegionBoundary[myRegion], container]], but I'm not near a Mathematica instance right now to try it out and I recall it having trouble with intersections of regions of different dimensionalities. 
– user484 Jul 14 '15 at 20:07 • @Rahul Like you said, it doesn't quite work since RegionBoundary[MyRegion] is of a different dimension than Container – BenP1192 Jul 14 '15 at 21:39 If I understand the question correctly, one possibility is to decompose the surface of the Tetrahedron into four triangles, intersect each with Container, compute the Area of the resulting planar objects, and sum them. Area[RegionIntersection[Polygon[#], Container]] & /@ Subsets[{{0, 2, 0}, {0, -2, 0}, {-2, 0, 0}, {0, 0, 2.5}}, {3}] // Total (* 7.98614 *) Generalization to an ImplicitRegion The preceding toy problem could be solved without much difficulty, because the solid Tetrahedron could be replaced by a set of surfaces. As noted in the comment below, the OP would like to compute the area of a truncated ImplicitRegion. If it can be represented as a surface, ImplicitRegion[f[x, y, z] == 0, {x, y, z}]; then the Area of f inside the cylinder given in the Question is Area[ImplicitRegion[f[x, y, z] == 0 && x^2 + y^2 < 1 && 2 > z > 0, {x, y, z}]] As an example, Area[ImplicitRegion[x^2 + y^2 + z^2 - 4 == 0 && x^2 + y^2 < 1 && 2 > z > 0, {x, y, z}]] (* -4 (-2 + Sqrt[3]) π *) DiscretizeRegion for Complex Regions Consider a more complex surface, taken from the Applications section of the ImplicitRegion documentation. If it is truncated by a cylinder, r = ImplicitRegion[x^6 - 5 x^4 y z + 3 x^4 y^2 + 10 x^2 y^3 z + 3 x^2 y^4 - y^5 z + y^6 + z^6 - 1 == 0 && x^2 + y^2 < 1 && 1.2 > z > .8, {x, y, z}]; then Area[r] returns unevaluated. However, DiscretizeRegion combined with NIntegrate works well. dr = DiscretizeRegion[r] NIntegrate[1, {x, y, z} ∈ dr] (* 3.1817 *) The function just added to the question can be handled similarly. 
MyRegion = ImplicitRegion[ f[x, y, z] == 5, {{x, -5.3, 5.3}, {y, -5.3, 5.3}, {z, 0, 1.67}}] ; dmr = DiscretizeRegion[MyRegion] NIntegrate[1, {x, y, z} ∈ dmr] (* 25.1979 *) By the way, sometimes decreasing MaxCellMeasure improves integration accuracy by more finely zoning the surface. For instance, dr = DiscretizeRegion[r, MaxCellMeasure -> {"Area" -> .01}] improves discretization of the first surface in this section, slightly correcting its area to 3.18411. • In reality, I need a more general method since I use complicated Implicit Regions rather than tetrahedrons. Also, I'm trying to understand how what you did works. I tried the following hoping to get 3.14, but it returned 0. Do you know why? Area[RegionIntersection[Polygon[{{3, 0, 0}, {-2, -2, 0}, {-2, 2, 0}}], Sphere[{0, 0, 0}, 1]]] – BenP1192 Jul 14 '15 at 22:17 • @BenP1192 A Sphere is hollow. Use Ball instead. – bbgodfrey Jul 14 '15 at 22:25 • @BenP1192 I and others need to know more about your ImplicitRegions. Must they be volumes, or, can they be surfaces? In any case, please provide a sample in Mathematica format. – bbgodfrey Jul 14 '15 at 22:46 • My problem with this solution is that I believe it still includes the area of the top portion of the region. I need to find the area only where f==5, but limited by a container. This means that the region of which I need the surface area is not necessarily a closed region. In this case, the upper bound would be left open. Does that make sense? It's a little difficult to describe. – BenP1192 Jul 15 '15 at 14:02 • Nevermind. I wasn't seeing the picture clearly. You're correct, thanks! – BenP1192 Jul 15 '15 at 14:50
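As a language-independent sanity check on the closed form -4 (-2 + Sqrt[3]) π quoted in the answer above: that region is the spherical cap of the radius-2 sphere lying over the unit disk, whose area can also be obtained by integrating the surface-area element 2r/Sqrt[4 - r^2] dr dθ directly. A short plain-Python cross-check (my own sketch, not part of the original answer):

```python
import math

# The truncated sphere is the cap z = sqrt(4 - r^2) over 0 <= r <= 1, with
# area element dS = (2 / sqrt(4 - r^2)) r dr dtheta, so
#   Area = 2*pi * Integral_0^1 2 r / sqrt(4 - r^2) dr.

def cap_area(n=100_000):
    dr = 1.0 / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr                      # midpoint rule
        total += 2.0 * r / math.sqrt(4.0 - r * r) * dr
    return 2.0 * math.pi * total

closed_form = 4.0 * math.pi * (2.0 - math.sqrt(3.0))  # = -4(-2+Sqrt[3]) Pi
print(cap_area(), closed_form)
```

Both values come out near 3.367, agreeing with the Mathematica result.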
https://thatsmaths.com/page/2/
Archive Page 2 Zeroing in on Zeros Given a function ${f(x)}$ of a real variable, we often have to find the values of ${x}$ for which the function is zero. A simple iterative method was devised by Isaac Newton and refined by Joseph Raphson. It is known either as Newton’s method or as the Newton-Raphson method. It usually produces highly accurate approximations to the roots of the equation ${f(x) = 0}$. A rational function with five real zeros and a pole at x = 1. George Salmon, Mathematician & Theologian George Salmon (1819-1904) [Image: MacTutor] As you pass through the main entrance of Trinity College, the iconic campanile stands before you, flanked, in pleasing symmetry, by two life-size statues. On the right, on a granite plinth is the historian and essayist William Lecky. On the left, George Salmon (1819-1904) sits on a limestone platform. Salmon was a distinguished mathematician and theologian and Provost of Trinity College. For decades, the two scholars have gazed down upon multitudes of students crossing Front Square. The life-size statue of Salmon, carved from Galway marble by the celebrated Irish sculptor John Hughes, was erected in 1911. Next Wednesday will be the 200th anniversary of Salmon’s birth [TM171 or search for “thatsmaths” at irishtimes.com]. Spiralling Primes The Sacks Spiral. The prime numbers have presented mathematicians with some of their most challenging problems. They continue to play a central role in number theory, and many key questions remain unsolved. Order and Chaos The primes have many intriguing properties. In his article “The first 50 million prime numbers”, Don Zagier noted two contradictory characteristics of the distribution of prime numbers. The first is the erratic and seemingly chaotic way in which the primes “grow like weeds among the natural numbers”. The second is that, when they are viewed in the large, they exhibit “stunning regularity”. 
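The Newton–Raphson iteration described in “Zeroing in on Zeros” above can be sketched in a few lines (a minimal illustration of the method, not code from the article):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: iterate x -> x - f(x)/f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Root of f(x) = x^2 - 2, i.e. sqrt(2), starting from x0 = 1:
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Convergence is quadratic near a simple root; a poor starting guess, or a rational function with a pole like the one pictured, can send the iteration astray.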
An English Lady with a Certain Taste Ronald Fisher in 1913 One hundred years ago, an English lady, Dr Muriel Bristol, amazed some leading statisticians by proving that she could determine by taste the order in which the constituents are poured in a cup of tea. One of the statisticians was Ronald Fisher. The other was William Roach, who was to marry Dr Bristol shortly afterwards. Many decisions in medicine, economics and other fields depend on carefully designed experiments. For example, before a new treatment is proposed, its efficacy must be established by a series of rigorous tests. Everyone is different, and no one course of treatment is necessarily best in all cases. Statistical evaluation of data is an essential part of the evaluation of new drugs [TM170 or search for “thatsmaths” at irishtimes.com]. ToplDice is Markovian Many problems in probability are solved by assuming independence of separate experiments. When we toss a coin, it is assumed that the outcome does not depend on the results of previous tosses. Similarly, each cast of a die is assumed to be independent of previous casts. However, this assumption is frequently invalid. Draw a card from a shuffled deck and reveal it. Then place it on the bottom and draw another card. The odds have changed: if the first card was an ace, the chances that the second is also an ace have diminished. The curious behaviour of the Wilberforce Spring. The Wilberforce Spring (often called the Wilberforce pendulum) is a simple mechanical device that illustrates the conversion of energy between two forms. It comprises a weight attached to a spring that is free to stretch up and down and to twist about its axis. Wilberforce spring [image from Wikipedia Commons].} In equilibrium, the spring hangs down with the pull of gravity balanced by the elastic restoring force. When the weight is pulled down and released, it immediately oscillates up and down. 
However, due to a mechanical coupling between the stretching and torsion, there is a link between stretching and twisting motions, and the energy is gradually converted from vertical oscillations to axial motion about the vertical. This motion is, in turn, converted back to vertical oscillations, and the cycle continues indefinitely, in the absence of damping. The conversion is dependent upon a resonance condition being satisfied: the frequencies of the stretching and twisting modes must be very close in value. This is usually achieved by having small adjustable weights mounted on the pendulum. There are several videos of a Wilberforce spring in action on YouTube. For example, see here. The Brief and Tragic Life of Évariste Galois On the morning of 30 May 1832 a young man stood twenty-five paces from his friend. Both men fired, but only one pistol was loaded. Évariste Galois, a twenty-year-old mathematical genius, fell to the ground. The cause of Galois’s death is veiled in mystery and speculation. Whether both men loved the same woman or had irreconcilable political differences is unclear. But Galois was abandoned, mortally wounded, on the duelling ground at Gentilly, just south of Paris. By noon the next day he was dead [TM169 or search for “Galois” at irishtimes.com]. French postage stamp issued in 1984.
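The stretch–twist energy exchange described in the Wilberforce-spring post above can be seen in a toy linear model: two unit-frequency oscillators with a weak coupling ε, so that the resonance condition is satisfied exactly. Starting with all the energy in the stretch mode, the twist amplitude grows to nearly the full amplitude over half a beat period. A hedged sketch (the model and constants are mine, chosen only to illustrate the resonance):

```python
import math

EPS = 0.02  # weak stretch-twist coupling; both modes have frequency 1

def deriv(state):
    # z'' = -z - (EPS/2) th,  th'' = -th - (EPS/2) z
    z, zdot, th, thdot = state
    return (zdot, -z - 0.5 * EPS * th, thdot, -th - 0.5 * EPS * z)

def rk4_step(state, dt):
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = deriv(state)
    k2 = deriv(add(state, k1, dt / 2))
    k3 = deriv(add(state, k2, dt / 2))
    k4 = deriv(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (1.0, 0.0, 0.0, 0.0)          # all energy in the stretch mode
dt, t_end = 0.01, 2 * math.pi / EPS   # half a beat: full transfer to twist
max_twist, t = 0.0, 0.0
while t < t_end:
    state = rk4_step(state, dt)
    t += dt
    max_twist = max(max_twist, abs(state[2]))
print(max_twist)  # approaches the initial stretch amplitude 1
```

The beat frequency is set by the splitting of the two normal modes, here about ε/2, so weaker coupling gives slower but still complete energy exchange.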
http://www.reference.com/browse/coprime
Coprime In mathematics, the integers a and b are said to be coprime or relatively prime if they have no common factor other than 1 or, equivalently, if their greatest common divisor is 1. The notation a  ⊥  b is sometimes used. For example, 6 and 35 are coprime, but 6 and 27 are not coprime because they are both divisible by 3. The number 1 is coprime to every integer. A fast way to determine whether two numbers are coprime is given by the Euclidean algorithm. Euler's totient function (or Euler's phi function) of a positive integer n is the number of integers between 1 and n which are coprime to n. Properties There are a number of conditions which are equivalent to a and b being coprime: As a consequence, if a and b are coprime and br ≡ bs (mod a), then r ≡ s (mod a) (because we may "divide by b" when working modulo a). Furthermore, if a and b1 are coprime, and a and b2 are coprime, then a and b1b2 are also coprime (because the product of units is a unit). If a and b are coprime and a divides the product bc, then a divides c. This can be viewed as a generalisation of Euclid's lemma, which states that if p is prime, and p divides a product bc, then either p divides b or p divides c. The two integers a and b are coprime if and only if the point with coordinates (a, b) in a Cartesian coordinate system is "visible" from the origin (0,0), in the sense that there is no point with integer coordinates between the origin and (a, b). (See figure 1.) The probability that two randomly chosen integers are coprime is 6/π² (see pi), which is about 60%. See below. Two natural numbers a and b are coprime if and only if the numbers 2^a − 1 and 2^b − 1 are coprime. Cross notation, group If n≥1 is an integer, the numbers coprime to n, taken modulo n, form a group with multiplication as operation; it is written as (Z/nZ)× or Zn*. Generalizations Two ideals A and B in the commutative ring R are called coprime if A + B = R. 
This generalizes Bézout's identity: with this definition, two principal ideals (a) and (b) in the ring of integers Z are coprime if and only if a and b are coprime. If the ideals A and B of R are coprime, then AB = A ∩ B; furthermore, if C is a third ideal such that A contains BC, then A contains C. The Chinese remainder theorem is an important statement about coprime ideals. The concept of being relatively prime can also be extended to any finite set of integers S = {a1, a2, ..., an} to mean that the greatest common divisor of the elements of the set is 1. If every pair of integers in the set is relatively prime, then the set is called pairwise relatively prime. Every pairwise relatively prime set is relatively prime; however, the converse is not true: {6, 10, 15} is relatively prime, but not pairwise relatively prime. (In fact, each pair of integers in the set has a non-trivial common factor.) Probabilities Given two randomly chosen integers $A$ and $B$, it is reasonable to ask how likely it is that $A$ and $B$ are coprime. In this determination, it is convenient to use the characterization that $A$ and $B$ are coprime if and only if no prime number divides both of them (see Fundamental theorem of arithmetic). Intuitively, the probability that any number is divisible by a prime (or any integer) $p$ is $1/p$. Hence the probability that two numbers are both divisible by this prime is $1/p^2$, and the probability that at least one of them is not is $1-1/p^2$. Thus the probability that two numbers are coprime is given by a product over all primes, $\prod_p^{\infty} \left(1-\frac{1}{p^2}\right) = \left(\prod_p^{\infty} \frac{1}{1-p^{-2}}\right)^{-1} = \frac{1}{\zeta(2)} = \frac{6}{\pi^2}$ ≈ 0.607927102 ≈ 61%. 
Here ζ refers to the Riemann zeta function, the identity relating the product over primes to ζ(2) is an example of an Euler product, and the evaluation of ζ(2) as π²/6 is the Basel problem, solved by Leonhard Euler in 1735. In general, the probability of $k$ randomly chosen integers being coprime is $1/\zeta(k)$. There is often confusion about what a "randomly chosen integer" is. One way of understanding this is to assume that the integers are chosen randomly between 1 and an integer $N$. Then for each upper bound $N$, there is a probability $P_N$ that two randomly chosen numbers are coprime. This will never be exactly $6/\pi^2$, but in the limit as $N \to \infty$, $P_N \to 6/\pi^2$.
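The two computational facts above — the Euclidean-algorithm coprimality test and the limiting density 6/π² — can be illustrated together in a few lines of Python (a sketch; the sample size and range are arbitrary choices of mine):

```python
import math
import random

def coprime(a, b):
    """Euclidean algorithm: a and b are coprime iff gcd(a, b) == 1."""
    while b:
        a, b = b, a % b
    return a == 1

def coprime_fraction(pairs=200_000, hi=10**6, seed=1):
    """Monte Carlo estimate of P_N for N = hi."""
    rng = random.Random(seed)
    hits = sum(coprime(rng.randint(1, hi), rng.randint(1, hi))
               for _ in range(pairs))
    return hits / pairs

print(coprime_fraction(), 6 / math.pi ** 2)  # both close to 0.6079
```

With a few hundred thousand samples the empirical fraction lands within about a tenth of a percent of 6/π².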
https://mattermodeling.stackexchange.com/questions/3711/how-to-separate-the-data-plot-for-spin-up-and-spin-down-band-structure-into-2-di
# How to separate the data plot for spin up and spin down band structure into 2 different graphs, in Pymatgen? I am a beginner user of the Pymatgen package. In order to process the data from the VASP DFT calculation software, I use Pymatgen to visualize the output band structure. When I do a spin-polarised band calculation, I cannot separate the band structure of spin up and spin down into 2 different subplots or graphs. Can anyone tell me how I can deal with this? It looks messy when the package automatically plots both into the same graph. Here I provide an example using my own Python scripts rather than Pymatgen's plotter (you can first save the data with Pymatgen and then plot it with Python). I assume that you can correctly perform the band and DOS calculations with the VASP code. The example I pick is the monolayer FM NiBr$$_2$$ and the final result is the following: • Thank you for your support. Your code works very well. May I ask one more question? Is it possible to also include the projection plot in each plot? I want to do the projection of electrons from different types of atoms. I've already included the LORBIT tag in my INCAR file. Nov 12, 2020 at 4:37 • Of course, you can. You can project to atom/orbital. – Jack Nov 12, 2020 at 4:59
http://www2.math.ethz.ch/education/bachelor/lectures/hs2015/math/chaotically-singular-spacetimes/index.html
## Chaotically Singular Spacetimes

Check the VVZ for current information.

Professor: Prof. Dr. Eugene Trubowitz
Time and location: Mi 8-10, Fr 10-11 in HG F 26.3

### Content

One might, more provocatively, have entitled the course: How does time end (in Einstein's general relativity)? In a word, badly. Not in a whimper, nor in a crunch, but in something much more exotic. More technically, what does a generic singular point, restricting time, in solutions to the Einstein gravitational field equations look like?

Special cosmological solutions, such as Friedmann's, do have singularities. In 1963, Lifshitz and Khalatnikov 'constructed a class' of singular solutions and concluded that '... the presence of a singularity in time is NOT a necessary property of cosmological models of the general theory of relativity, and that the general case of an arbitrary distribution of matter and gravitational field does not lead to the appearance of a singularity.'

In 1965 Penrose and Hawking formulated and proved 'incompleteness' theorems that convinced even Lifshitz and Khalatnikov that singularities in time ARE a necessary property of cosmological models of the general theory of relativity. Penrose and Hawking proved that, under very general, physically reasonable conditions, a spacetime (that is, a solution to the Einstein equations) has a light ray (null geodesic) that suddenly ends ('incompleteness') sufficiently far in the past. They adroitly sidestep the problem of defining what a singularity actually is by saying it is the 'place' where their light rays end. The proofs of the incompleteness theorems are not hard. That's good. Unfortunately, they are by their very nature completely non-constructive and provide no quantitative information at all about what a 'singularity' really looks like.

In 1970, Belinskii, Khalatnikov and Lifshitz revisited the work of 1963 and found that Khalatnikov and Lifshitz had missed something and that '...
we shall show that there exists a general solution which exhibits a physical singularity with respect to time.' In 1982 they revised the 1970 proposal. Their work culminates in a series of fascinating, but very, very heuristic, statements about the possible existence of a class of singular solutions to the field equations. These heuristic statements are referred to as the 'BKL Conjectures'.

Next semester, we will rigorously formulate and prove the 'BKL Conjectures' for homogeneous spacetimes. That is, we will construct a set of initial data with positive measure which evolve into homogeneous, chaotically singular spacetimes that exhibit all of the BKL phenomenology. Most importantly, there are chaotic oscillations, growing in magnitude, whose distribution is governed by the continued fraction expansion of a parameter appearing in the initial data.

The lectures will be completely self-contained. One doesn't need to know anything about general relativity; the Einstein field equations will be introduced from scratch. We will classify real, three-dimensional Lie algebras, introduce tensor analysis and discuss the geometry of homogeneous spacetimes. We will also derive the basic properties of continued fractions and the Gauss map $\displaystyle x \mapsto \frac 1x - \Bigl\lfloor \frac 1x \Bigr\rfloor$ from $(0,1) \smallsetminus \mathbf{Q}$ to itself.
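The connection between the Gauss map and continued fractions is easy to experiment with numerically. As an illustrative sketch (not part of the course materials): iterating the map on a rational $x \in (0,1)$ reads off the continued fraction digits $a_k = \lfloor 1/x \rfloor$, terminating when the orbit hits 0.

```python
from fractions import Fraction
import math

def gauss_map_digits(x, n):
    """Iterate the Gauss map x -> 1/x - floor(1/x) on (0,1),
    collecting the continued-fraction digits a_k = floor(1/x)."""
    digits = []
    for _ in range(n):
        if x == 0:  # a rational x reaches 0 in finitely many steps
            break
        a = math.floor(1 / x)
        digits.append(a)
        x = 1 / x - a  # one application of the Gauss map
    return digits

# 7/16 = 1/(2 + 1/(3 + 1/2)), so its digit sequence is [2, 3, 2]
print(gauss_map_digits(Fraction(7, 16), 10))  # → [2, 3, 2]
```

An irrational starting point never terminates; for instance, the golden-ratio conjugate $(\sqrt{5}-1)/2$ is a fixed point of the map up to the digit shift and yields the constant sequence 1, 1, 1, ….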
https://tex.stackexchange.com/questions/562899/booktabs-dotrule-as-midrule
# booktabs: \dotrule as \midrule

I'd like to have a dotted rule like \midrule from the booktabs package. I took code from booktabs.sty, simplified it, and have:

```latex
\documentclass{article}
\usepackage{array}
\usepackage{booktabs}

\makeatletter
\def\dotrule{\noalign{\ifnum0=`}\fi
  \@aboverulesep=\aboverulesep
  \global\@belowrulesep=\belowrulesep
  \global\@thisruleclass=\@ne
  \@BTdotted}
\def\@BTdotted{%
  {\CT@arc@\hrule\@height\@thisrulewidth}%
  \futurenonspacelet\@tempa\@BTendrule}
\makeatother

\begin{document}
Text

\begin{tabular}{lr}\toprule
Huu & Haa \\ \dotrule
\end{tabular}
\end{document}
```

And now I'm stuck replacing the central \hrule\@height\@thisrulewidth with something that makes not a line, but dots. I've been struggling with \leaders, but didn't get it. Maybe somebody has an idea. I found lots of similar questions, of course. But the trick is to have a command with the parameters of the booktabs package!

• It will be difficult. \hrule is a primitive which inserts a rule in the vertical list, and the computation of the length of the rule is done by TeX very late. On the other side, leaders need a box (and are able to fill that box). But we can't construct a horizontal box of the width of the array during the construction of the array... Sep 16, 2020 at 13:50
• It's possible to do something with \multispan, but you will have to give the total number of columns of the array as an argument of your command \midrule. We can also compute the width of the array (with PGF/TikZ) and store it in the aux file in order to use it in the next run. Sep 16, 2020 at 13:50
• @F.Pantigny So why did you delete the plain-tex tag? I know that there are solutions, but as you describe your insight into the construction of \hrule, a replacement with dots seems a real TeX issue. Sep 16, 2020 at 13:58
• plain TeX is a format (that is to say, a set of constructions with TeX primitives which is, in some way, pre-compiled). LaTeX is another format. When you use LaTeX, you don't use plain TeX: you use TeX. Sep 16, 2020 at 14:15

## 2 Answers

Here is a command \dotrule which respects the syntax and the parameters of booktabs (aboverulesep, belowrulesep and lightrulewidth) but which is available only in the environment {NiceTabular} of nicematrix. The dotted line is drawn by TikZ (it's possible to change the characteristics of that dotted line with the tools of TikZ).

```latex
\documentclass{article}
\usepackage{nicematrix}
\usepackage{booktabs}
\usepackage{tikz}
\usetikzlibrary{calc}
\usepackage{xcolor}

\ExplSyntaxOn
\makeatletter
\cs_set:Npn \dotrule
  {
    \noalign \bgroup
    \peek_meaning:NTF [
      { \__dose_dotrule: }
      { \__dose_dotrule: [ \lightrulewidth ] }
  }
\cs_set:Npn \__dose_dotrule: [ #1 ]
  {
    \skip_vertical:n { \aboverulesep + \belowrulesep + #1 }
    \egroup
    \tl_gput_right:Nx \g_nicematrix_code_after_tl
      { \__dose_dotrule:nn { \int_use:N \c@iRow } { #1 } }
  }
\cs_new_protected:Nn \__dose_dotrule:nn
  {
    {
      \dim_set:Nn \l_tmpa_dim { \aboverulesep + ( #2 ) / 2 }
      \CT@arc@
      \tikz \draw [ dotted , line~width = #2 ]
        ([yshift=-\l_tmpa_dim]#1-|1) --
        ([yshift=-\l_tmpa_dim]#1-| \int_eval:n { \c@jCol + 1 }) ;
    }
  }
\makeatother
\ExplSyntaxOff

\begin{document}
\begin{NiceTabular}{cc}
\toprule
Header 1 & Header 2 \\
\dotrule
text & text \\
some text & other text \\
\bottomrule
\end{NiceTabular}
%
\hspace{2cm}
%
\begin{NiceTabular}{cc}
\toprule
Header 1 & Header 2 \\
\midrule
text & text \\
some text & other text \\
\bottomrule
\end{NiceTabular}

\vspace{1cm}

\arrayrulecolor{blue}
\begin{NiceTabular}{cc}
\toprule
Header 1 & Header 2 \\
\dotrule[3pt]% <-- mandatory
text & text \\
some text & other text \\
\bottomrule
\end{NiceTabular}
%
\hspace{2cm}
%
\begin{NiceTabular}{cc}
\toprule
Header 1 & Header 2 \\
\midrule[3pt]
text & text \\
some text & other text \\
\bottomrule
\end{NiceTabular}
\end{document}
```

• Instead of improving booktabs you basically improved nicematrix: with your patch it will recognize the booktabs commands with a new feature: dotted lines. I'm skimming through the manual of nicematrix and that package looks really great; booktabs has even been implemented already! So I'll have to teach the commands of nicematrix to Emacs. I often use datatool and numprint, so maybe I'll come back with a question on that later. However, thank you very much for (a) the nice package and (b) this answer! Sep 21, 2020 at 9:38
• The command \dotrule[3pt] needs a % sign at the end, otherwise the next row in the tabular will begin with a space. Write \dotrule[1pt]% or whatever value is inside the bracket. Sep 30, 2020 at 10:03
• You are right. I have added a % in the code. Sep 30, 2020 at 11:30

An easy solution with the booktabs environment of the tabularray package:

```latex
\documentclass{article}
\usepackage{tabularray}
\UseTblrLibrary{booktabs}

\begin{document}
\begin{booktabs}{lll}
\toprule
Alpha & Beta & Gamma \\
\midrule[dashed]
Epsilon & Zeta & Eta \\
\midrule[dotted]
Iota & Kappa & Lambda \\
\bottomrule
\end{booktabs}
\end{document}
```
https://gsebsolutions.in/gseb-solutions-class-11-statistics-chapter-2-ex-2/
# GSEB Solutions Class 11 Statistics Chapter 2 Presentation of Data Ex 2

Gujarat Board Statistics Class 11 GSEB Solutions Chapter 2 Presentation of Data Ex 2 Textbook Exercise Questions and Answers.

## Gujarat Board Textbook Solutions Class 11 Statistics Chapter 2 Presentation of Data Ex 2

Section – A

Choose the correct option from those given below each question:

Question 1. Which of the following variables is discrete?
(a) Height of a person (b) Weight of a commodity (c) Area of a ground (d) Number of children per family
Answer: (d) Number of children per family

Question 2. Which of the following variables is continuous?
(a) Number of errors per page of a book (b) Number of cars produced (c) Number of accidents on road (d) Monthly income of a person
Answer: (d) Monthly income of a person

Question 3. Name the method of classification of raw data related to the daily demand of a product.
(a) Classification of attribute data (b) Classification of numeric data (c) Raw distribution (d) Manifold classification
Answer: (b) Classification of numeric data

Question 4. Name the type of classification of the data related to the occupation and education of a person living in a certain region.
(a) Tabulation (b) Classification of numeric data (c) Raw distribution (d) Discrete frequency distribution
Answer: (a) Tabulation

Question 5. In a continuous frequency distribution, what is the class length of a class?
(a) Average of two successive lower boundary points. (b) Average of class limits. (c) Difference between the upper boundary point and lower boundary point of that class. (d) Average of the upper boundary point and lower boundary point of the class.
Answer: (c) Difference between the upper boundary point and lower boundary point of that class.

Question 6. The range of an ungrouped data set is 55 and it is divided into 6 classes. What is the class length?
(a) 10 (b) 9 (c) 9.17 (d) 10.17
Answer: (a) 10 (55/6 ≈ 9.17, taken up to the next integer so that ck ≥ R)

Question 7. Inclusive classes for a distribution are 10-19.5, 20-29.5, 30-39.5. What are the exclusive class limits for the second class?
(a) 19.5 – 29.5 (b) 19.75 – 29.75 (c) 20 – 30 (d) 19 – 29
Answer: (b) 19.75 – 29.75

Question 8. A discrete variable has values 0, 1, 2, 3, 4 with respective frequencies 2, 4, 6, 8, 14. What is the value of the 'more than' type cumulative frequency when the value of the variable is 2?
(a) 28 (b) 12 (c) 34 (d) 6
Answer: (a) 28

Question 9. A continuous distribution has classes 0-9, 10-19, 20-29, 30-39 with respective frequencies 10, 20, 40, 10. What is the 'less than' type cumulative frequency for the boundary point 29.5?
(a) 30 (b) 50 (c) 70 (d) 80
Answer: (c) 70 (10 + 20 + 40; the boundary point 29.5 closes the class 20-29)

Question 10. For a continuous variable, the classes are 1-1.95, 2-2.95, 3-3.95, 4-4.95, 5-5.95. What is the lower boundary point of the second class?
(a) 1.995 (b) 2 (c) 2.975 (d) 1.975
Answer: (d) 1.975

Question 11. Which of the following statements is/are true?
Statement 1: A method of representing large and complex data in a simple and attractive manner is called a diagram.
Statement 2: Self-explanatory representation of the main characteristics of the data is called a diagram.
Statement 3: Representation of a comparative study of data is called a diagram.
(a) Only statement 1 is true. (b) Only statements 1 and 2 are true. (c) Statements 1, 2 and 3 are true. (d) All three statements are false.
Answer: (c) Statements 1, 2 and 3 are true.

Question 12. The class intervals for a continuous variable are 0-99, 100-199, 200-299, 300-399, 400-499. What is the mid-value of the second class?
(a) 149.5 (b) 150 (c) 199.5 (d) 99.5
Answer: (a) 149.5

Question 13. What do we call a table that shows the designation, gender and marital status of the employees of a company?
(a) Simple classification (b) Classification of numeric data (c) Manifold classification (d) Simple table
Answer: (c) Manifold classification

Question 14. Which of the following diagrams is used to represent sub-data of classified information?
(a) Bar diagram (b) Divided bar diagram (c) Multiple bar diagram (d) Pictogram
Answer: (b) Divided bar diagram

Question 15.
Which of the following diagrams is used for comparing the sub-data of the classified data?
(a) Pictogram (b) Pie chart (c) Bar diagram (d) Divided bar diagram
Answer: (b) Pie chart

Section – B

Answer the following questions in one sentence:

Question 1. Define discrete variable.
A variable that can assume only definite or countable values within the specified range is called a discrete variable.

Question 2. Define continuous variable.
A variable that can assume any value within the specified range is called a continuous variable.

Question 3. What is classification?
A process of arranging ungrouped or raw data in a systematic and short form is called classification.

Question 4. State the types of classification.
Classification is of two types: 1. Quantitative classification and 2. Qualitative classification.

Question 5. Define the frequency of an observation.
A numeric value showing the repetition of the value of an observation is called the frequency of that observation. It is denoted by the symbol 'f'.

Question 6. State the method to determine the number of classes on the basis of the range of the data and the class length.
When the range of the data and the class length are given, the number of classes is determined by the following formula:
Number of classes = $$\frac{\text{range}}{\text{class length}}$$

Question 7. When should one form a frequency distribution with unequal class lengths?
When the range of raw or ungrouped data is very large, one should form a frequency distribution with unequal class lengths.

Question 8. Define cumulative frequency.
In a frequency distribution, the sum of the frequencies up to the value of an observation or a class is called the cumulative frequency of that value or class. It is denoted by the symbol 'cf'.

Question 9. Define 'less than' type cumulative frequency distribution for discrete data.
A table showing the 'less than' cumulative frequency corresponding to the various values of discrete data is called a 'less than' type cumulative frequency distribution for discrete data.

Question 10.
Define 'more than' type cumulative frequency distribution for continuous data.
A table showing the 'more than' cumulative frequency corresponding to the lower boundary points of the various classes is called a 'more than' type cumulative frequency distribution for continuous data.

Question 11. Write a formula for finding the mid-value of a class.
Mid-value of a class = $$\frac{\text{upper limit of class + lower limit of class}}{2}$$

Question 12. Define tabulation.
Tabulation is a process of arranging qualitative data in a systematic manner into rows and columns.

Question 13. Define manifold classification.
A classification of raw data on the basis of more than one attribute is called manifold classification.

Question 14. What is the characteristic of the best table to represent qualitative data?
The characteristic of the best table to represent qualitative data is that it satisfies the objective of classification.

Question 15. What is the main disadvantage of the classification of data?
The main disadvantage of the classification of data is that the basic form of the individual units of the data is changed.

Question 16. In a statistical study, what is the main objective of a diagram?
In a statistical study, the main objective of a diagram is to represent huge and complex data in a simple, attractive and concise form.

Question 17. State the types of diagrams.
There are three types of diagrams: 1. One-dimensional diagram, 2. Two-dimensional diagram and 3. Pictogram.

Question 18. For which type of data is a multiple bar diagram drawn?
When the data about different places, things or times are given on more than one mutually related characteristic, a multiple bar diagram is drawn.

Question 19. When do we draw a divided bar diagram?
When the data about different places, things or times consist of several mutually related sub-data on different components, a divided bar diagram is drawn.

Question 20.
State the main objective of the percentage divided bar diagram.
The main objective of the percentage divided bar diagram is to compare the mutually related sub-data effectively.

Section – C

Answer the following questions as required:

Question 1. Define quantitative and qualitative data.
• Quantitative data: The data collected on a numeric variable – discrete or continuous – are called quantitative data.
• Qualitative data: The data collected on a qualitative variable, or attribute, are called qualitative data.

Question 2. Define discrete frequency distribution with an illustration.
A table showing the frequency corresponding to the various values of a discrete variable is called a discrete frequency distribution. (Illustration: a discrete frequency distribution showing the number of children in 100 families.)

Question 3. Define continuous frequency distribution with an illustration.
A table showing the frequency corresponding to the various classes of a continuous variable is called a continuous frequency distribution. It is prepared when the range of the data is very large. (Illustration: a continuous frequency distribution showing the heights of 50 students of Std. XI.)

Question 4. Explain the definition of inclusive continuous frequency distribution.
If the upper limit of a class and the lower limit of the succeeding class are not equal, and the upper limit of a class is included in that class, then such classes are called inclusive-type classes. A continuous frequency distribution consisting of inclusive-type classes is called an inclusive continuous frequency distribution.
Explanation: A frequency distribution with the classes 10-19, 20-29, 30-39, … is an inclusive continuous frequency distribution. Here, the upper limit 29 of the class 20-29 and the lower limit 30 of its succeeding class 30-39 are not equal. Also, the upper limit 29 is included in the class 20-29.

Question 6. Write formulae for obtaining class boundary points from inclusive class limits.
The formulae to find the class boundary points from the class limits of an inclusive continuous frequency distribution are as follows:
Lower boundary point = lower class limit − $$\frac{d}{2}$$
Upper boundary point = upper class limit + $$\frac{d}{2}$$
where d = (lower limit of the succeeding class) − (upper limit of the class).

Question 7. Find the mid-values of each class of the following frequency distribution:
(The frequency distribution table is not reproduced here.)

Question 8. For the frequency distribution given in the above problem, find the class length of each class.
The given frequency distribution is an inclusive continuous frequency distribution. Converting it into exclusive form, we determine the class length of each class. The difference between the upper limit of a class and the lower limit of the immediately following class is 1. Therefore, we subtract $$\frac{1}{2}$$ = 0.5 from the lower limit of each class and add it to the upper limit.

Question 9. Prepare a 'less than' type cumulative frequency distribution from the following:

| Observation | 10 | 20 | 30 | 40 | 50 |
|---|---|---|---|---|---|
| Frequency | 10 | 30 | 30 | 20 | 10 |

'Less than' type cumulative frequency distribution:

| Observation | Frequency | 'Less than' cf |
|---|---|---|
| 10 | 10 | 10 |
| 20 | 30 | 40 |
| 30 | 30 | 70 |
| 40 | 20 | 90 |
| 50 | 10 | 100 |

Question 10. The demand for a certain item is classified as good, moderate or weak. On the basis of a study for an entire year, it is known that the demand was moderate during 22 weeks and weak during 18 weeks. Present this information in a table.
We take 52 weeks in a year, so the demand was good during 52 − 22 − 18 = 12 weeks.

| Demand | Number of weeks |
|---|---|
| Good | 12 |
| Moderate | 22 |
| Weak | 18 |
| Total | 52 |

[Note: The figures shown in bold are obtained by simple calculations.]

Question 11. Complete the following table:
(The table is not reproduced here.) [Note: The figures shown in bold are obtained by simple calculation.]

Question 12. Differentiate between inclusive and exclusive continuous frequency distribution.

| Inclusive Continuous Frequency Distribution | Exclusive Continuous Frequency Distribution |
|---|---|
| 1. It is carried out for discrete raw data having a large range. | 1. It is carried out for continuous raw data. |
| 2. The upper limit of each class and the lower limit of its succeeding class are not equal. | 2. The upper limit of each class and the lower limit of its succeeding class are equal. |
| 3. The upper limit of a class is included in that class. For example, 20-24, 25-29, 30-34, …; here, the upper limit 24 is included in the class 20-24. | 3. The upper limit of a class is excluded from that class. For example, 20-25, 25-30, 30-35, …; here, the upper limit 25 is excluded from the class 20-25 and is included in its succeeding class 25-30. |
| 4. Class limits and class boundary points are not the same. | 4. Class limits are the class boundary points. |

Question 13. State the limitations of diagrams.
The limitations of diagrams are as follows:
• Lack of accuracy in drawing diagrams leads to wrong interpretation.
• The illusionary effect of diagrams misleads public opinion.
• There is a loss of accuracy in presenting the data by diagrams.

Question 14. What are one-dimensional diagrams? State their names.
A diagram drawn by considering only one characteristic of the data is called a one-dimensional diagram. There are four types of one-dimensional diagram: 1. Bar diagram, 2. Multiple or adjacent bar diagram, 3. Simple divided bar diagram and 4. Percentage divided bar diagram.

Question 15. Explain two-dimensional diagrams in brief.
When the volume of data is large, the data are presented in a diagram considering both length and breadth. Such diagrams are called two-dimensional diagrams. In a two-dimensional diagram, total value = area of the diagram. Square, rectangle, circle and pie (sectorial) diagrams are two-dimensional diagrams.

Question 16. Represent the following data through a bar diagram:
(The data and diagrams are not reproduced here: a simple bar diagram showing the production for different years and a simple bar diagram showing the number of students in different faculties.)

Section – D

Answer the following questions as required:

Question 1. What is the necessity of classification in a statistical study?
The necessity of classification in a statistical study is due to the following reasons:
• To represent large data in a simple, short and attractive manner.
• For easy comparison between the various characteristics of the data.
• To save time, money and labour in the analysis of the data.
• To obtain information easily regarding the various characteristics of the data under study.
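The 'less than' and 'more than' cumulative frequencies used throughout this chapter can be computed mechanically from the class frequencies. A small illustrative script (not part of the textbook) checks the data of Question 9, Section C:

```python
from itertools import accumulate

def cumulative_frequencies(freqs):
    """Return ('less than' cf, 'more than' cf) lists for class frequencies."""
    less_than = list(accumulate(freqs))  # running totals, ascending
    total = less_than[-1]
    # 'more than' cf for a value = total minus frequencies of all smaller values
    more_than = [total - cf + f for cf, f in zip(less_than, freqs)]
    return less_than, more_than

# Data of Question 9 (Section C): observations 10..50 with these frequencies
lt, mt = cumulative_frequencies([10, 30, 30, 20, 10])
print(lt)  # → [10, 40, 70, 90, 100]
print(mt)  # → [100, 90, 60, 30, 10]
```

Note that the 'less than' list is ascending and the 'more than' list descending, exactly as stated in the definitions of the two distribution types.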
Question 2. Explain the classification of numeric data with an appropriate illustration.
The classification of numeric data is called numerical, or quantitative, classification.
Illustration: If the data are collected on 'printing mistakes per page' of a book of 50 pages, then 50 observations on 'printing mistakes per page' are obtained. Studying these ungrouped data, it is observed that there are 10 pages having 0 printing mistakes, 18 pages having 1 printing mistake, 10 pages having 2 printing mistakes and 12 pages having 3 printing mistakes. The process of dividing the data in this manner is called numerical classification, which can be presented in tabular form.
Here, 'number of printing mistakes per page' is a numeric variable; hence the classification mentioned above is called numerical classification.
• A frequency table is formed by the numerical classification of the data.
• A numeric variable may be discrete or continuous. Therefore, we get a discrete or a continuous frequency table by the numerical classification of the data.

Question 3. Explain the classification of qualitative data with a suitable illustration.
The classification of data on a qualitative variable, or attribute, is called qualitative classification.
Illustration: Selecting 50 flowers from a garden, information on the colours of the flowers is obtained. It is observed from the data that 12 flowers are of white colour, 10 flowers are of yellow colour, 11 flowers are of red colour, 12 flowers are of pink colour and 15 flowers are of blue colour. These data can be presented in short in a table.
Here, 'colour of flower' is a qualitative variable. Therefore, such a classification is called the classification of qualitative data.

Question 4. Write a short note on 'cumulative frequency distribution'.
The sum of the frequencies up to the value of an observation or class is called the cumulative frequency (cf), and its distribution is called a cumulative frequency distribution. The two types of cumulative frequency distribution are:
1. 'Less than' type cumulative frequency distribution: The sum of the frequencies up to the specified value of the observation, or the specified upper boundary point of a class, is called the 'less than' type cumulative frequency of that value or class, and the distribution is called a 'less than' type cumulative frequency distribution. Here, the cumulative frequencies are in ascending order.
2. 'More than' type cumulative frequency distribution: The sum of the frequencies of the specified value of the observation, or the lower boundary point of the specified class, and of all the values or classes succeeding it, is called the 'more than' type cumulative frequency, and its distribution is called a 'more than' type cumulative frequency distribution. Here, the cumulative frequencies are in descending order.

Question 5. Discuss the points for constructing a continuous frequency distribution.
For constructing a continuous frequency distribution, the following points are to be considered:
• When the variable of the data is continuous, or the range of the variable is large, a continuous frequency distribution should be constructed.
• Generally, the number of classes should be between 6 and 15. Under special circumstances, the number of classes may be less than 6 or more than 15.
• Considering the number of classes k and the range of the data R, the class length is decided using the formula c = $$\frac{R}{k}$$.
• The value of c is selected such that ck ≥ R and c is a positive integer.
• Generally, the class length of each class is kept equal. But when the range of the data is large, keeping in view the number of classes, classes of different lengths can be constructed.
• Usually, the initial class should begin with a number that is a multiple of the class length and is smaller than and close to the lowest observation of the data.
• Classes can be chosen either of inclusive type, such as 10-14, 15-19, 20-24, …, or of exclusive type, such as 10-15, 15-20, 20-25, ….
• For data on a continuous variable, exclusive-type classes should be preferred; when the range of discrete data is large, inclusive-type classes should be preferred.

Question 6. State the guiding rules for the construction of a table.
In order to make the information more meaningful and to derive significant conclusions easily from the table, the guiding rules for the construction of a good table are as follows:
• An appropriate title should be given.
• There should be clear and simple captions for the rows and columns.
• The size of the table should be proportionate to the space available.
• Interrelated information should be placed adjacent to each other.
• Large numbers should be represented in hundreds, thousands, lakhs or crores.
• Separate lines should be drawn to distinguish the main characteristics of the data.
• There should be provision in the table for indicating the totals of primary and subsidiary characteristics.
• A large volume of data should be represented in several tables instead of a single table.
• The source of the data must be mentioned at the end of the table.
• Before preparing the final table, a rough table should be prepared.

Question 7. State the uses of tabulation.
Tabulation is a process of systematic arrangement of qualitative, or attribute, raw data into rows and/or columns. Its uses are as follows:
• It represents extensive data in a simple, organised and precise manner.
• Required information can be obtained easily.
• The various characteristics to be compared are placed side by side; hence comparison becomes easy.
• Row and/or column totals are found, hence errors can be rectified easily.
• Unnecessary information is removed; hence the time, money and labour required for the study of the data are saved.
• The analysis of the data becomes simple and convenient.

Question 8. Obtain the original frequency distribution from the following data:
The difference between two successive mid-values is 100. Therefore the class length is c = 100, and using the formulae
Lower limit = Mid value − $$\frac{c}{2}$$
Upper limit = Mid value + $$\frac{c}{2}$$
we obtain the class limits for each mid-value. (The resulting frequency distribution table is not reproduced here.)

Question 9. Out of 40 persons working in an office, 60% are females and the remaining 40% are males. 50% of the males are married, whereas the ratio of married to unmarried females is 5 : 3. Present this information in a table.

Table showing the 40 employees of an office according to their sex and marital status:

| Sex | Married | Unmarried | Total |
|---|---|---|---|
| Male | 8 | 8 | 16 |
| Female | 15 | 9 | 24 |
| Total | 23 | 17 | 40 |

[Explanation: No. of females = 40 × $$\frac{60}{100}$$ = 24; therefore, no. of males = 40 − 24 = 16. No. of married males = 16 × $$\frac{50}{100}$$ = 8; therefore, no. of unmarried males = 16 − 8 = 8. No. of married females = $$\frac{5}{8}$$ × 24 = 15; therefore, no. of unmarried females = $$\frac{3}{8}$$ × 24 = 9.]

Question 10. Information regarding the monthly income of 100 workers is given below. Obtain the original frequency distribution from it.
The difference between two successive upper boundary points is 500. Therefore, the class length is c = 500. (The original frequency distribution table is not reproduced here.)

Question 11. Marks of 200 students in an examination are as under. Obtain the original frequency distribution.
The given frequency distribution is a 'more than' type cumulative frequency distribution. The difference between two successive lower boundary points is 10. Therefore, the class length is c = 10. (The original frequency distribution table is not reproduced here.)

Question 12.
From the data given below, obtain the original frequency distribution:
The difference between two successive mid values is 5. Therefore the class length is c = 5. The original frequency distribution is obtained as follows:

Question 13. There are 1000 buses used for public transport in Ahmedabad city. Of them, 350 are used as BRTS and the remaining as AMTS. Out of the total of 400 air-conditioned buses, 250 were used as BRTS. Present this information in a suitable table.

Table showing the number of buses in Ahmedabad city according to mode of transportation and type of bus

Mode of transportation   AC    Non-AC   Total
BRTS                     250     100      350
AMTS                     150     500      650
Total                    400     600     1000

Question 14. Out of 1500 students of a college, 900 were boys and of them, 250 were in the science stream; 250 girls were in the commerce stream. Present these data in an appropriate table.

Table showing the number of college students according to their streams and sex

Stream             Boys   Girls   Total
Science Stream      250     350     600
Commerce Stream     650     250     900
Total               900     600    1500

Question 15. Explain the importance of diagrams in statistical study.
The objective of a diagram is to present data in a simple and interesting manner. It provides an impressive medium to present the data attractively. Its importance in statistical study becomes more obvious from the uses mentioned below.
1. Attractive presentation: The presentation of statistical data by diagrams is so attractive that the characteristics of the data can be remembered for a long time by the person who studies them.
2. Clear presentation: The data which are difficult to understand from their description can be explained easily by diagrams.
3. Simple presentation: Complex data can be easily explained by diagrams.
4. Easy to compare: Two or more sets of data can be compared easily by diagrams.
5. Helpful to children and illiterates: The presentation of data by diagrams is very useful for illiterates, the less educated and children.
The message of the data can be understood without studying the figures of the data. A diagram is a unique device to educate children.
6. Useful for business and industries: Traders and manufacturers can advertise their products effectively with the use of attractive diagrams.
7. Useful in social sciences: Diagrams become essential for highlighting important aspects in sciences like Psychology, Economics and Sociology.
8. Helpful in social reforms: Diagrams are very effective in creating the desired impression on the minds of people through different campaigns, and in educating different classes of society for the removal of social vices and the implementation of social reforms.
9. Concise presentation: A large volume of statistical data can be presented promptly and in concise form by diagrams.
10. Uniform interpretation: Data represented by diagrams can be easily understood irrespective of language barriers.

Question 16. Write a short note on one-dimensional diagrams.
A diagram drawn by considering only one characteristic of the data is called a one-dimensional diagram. There are four types of one-dimensional diagrams:
1. Bar diagram: It is used to represent data on different places, things or times. To draw a bar diagram, the different places, things or times are taken on the X-axis and the measures of the respective places, things or times on the Y-axis with an appropriate scale. Bars of equal width at equal distances are drawn with heights proportional to the measures. The diagram formed in such a manner is called a bar diagram. In a bar diagram the logical order of the bars should be maintained. To make comparative study easy, the bars showing places or things should be arranged in proper order on the graph, but the bars showing times are presented as they are on the graph paper.
2.
Multiple or adjacent bar diagram: When the data about different places, things or times are collected on more than one mutually related characteristic, a multiple bar diagram is drawn by placing the related bars close to each other. If the given data are related to time, then the bars are drawn in order of time, but when the data are not related to time they are arranged in ascending or descending order of any one of the characteristics of the data.
3. Simple divided bar diagram: If the data on different places, things or times consist of several mutually related sub-data on different components, then a simple divided bar diagram is drawn. In this diagram, first of all a bar of proper width and height proportional to the total value of the data is drawn. Then it is divided into different segments in accordance with the sub-data, distinguished by various signs.
4. Percentage divided bar diagram: In a simple divided bar diagram the mutually related sub-data cannot be effectively compared. To overcome this difficulty, a percentage divided bar diagram is drawn. In this diagram, taking the total value of the data as 100 %, the percentages of the sub-data are calculated. A bar of appropriate width and height proportional to 100 % is drawn and is divided in accordance with the percentages of the sub-data. With such a diagram the mutually related sub-data can be effectively compared, but the total values cannot be compared.

Question 17. Write a short note on two-dimensional diagrams.
A diagram drawn by considering both length and breadth, used when the volume of the data is large, is called a two-dimensional diagram.
• In two-dimensional diagrams the total value is shown as an area.
• Square, rectangle, circle and pie diagrams are two-dimensional diagrams.
Circle diagram: When the volume of the data regarding two or more places, things or times is large, circle diagrams are drawn for such data.
• In a circle diagram, the square roots of the volumes of the different data are taken as the radii of the circles.
Arranging the radii in ascending or descending order, circles are drawn with centres on the same line at equal distances from each other. When data on time are given, the circles are drawn in order of time only.
• If the square root of the volume is too large, divide it by a constant, and if it is too small, multiply it by a constant. The radius is determined in this manner.
Pie diagram: If the data on different places, things or times consist of several mutually related sub-data and are numerically large, a pie diagram is drawn.
• In this diagram, the total volume of the data is represented by a circle of suitable radius and this circle is divided into sectors in accordance with the sub-data.
• Here, the total volume of the data is taken as 360° and the volumes of the sub-data are expressed as measures of angles and are presented on the circle as the respective circular sectors.

Question 18. Explain pictogram with an illustration.
A diagram in which the data are represented by selecting an appropriate picture is called a pictogram.
• Illustration: If data related to the yield of wheat are given, they can be shown by symbols of ears of wheat. The pictures of ears of wheat are drawn in proportion to the amount of the given data.
• A pictogram is a diagram that draws the quick attention of the viewer.
• Through pictograms the data can be easily understood by illiterates, less educated people and children.
• A pictogram has no barrier of language, hence the data can be easily understood by people of any region.

Question 19. The agricultural production index numbers for two different states are as under. Present them using a suitable diagram.
We draw a multiple bar diagram by keeping the bars of the index numbers of agricultural production for States A and B adjoining each other. The years are shown on the X-axis and the index numbers of agricultural production are shown on the Y-axis.
Multiple bar diagram showing the index numbers of agricultural production of two states A and B for different years

Question 20. Area (in sq. mt.) of 5 different regions is as under. Draw a pie diagram:
Taking the total area (5 + 8 + 29 + 44 + 71 =) 157 sq. mt. as 360°, we prepare the following table for the calculation of degrees for the areas of the different regions.
Pie diagram showing the areas of the different regions

Question 21. Production of a commodity in three different factories is as under. Present it through a suitable diagram.

Factory                   P     Q      R
Production (lakh ₹)      256   576   1024

The given data are numerically large. Therefore they are represented in a circle diagram. To determine the radii for the production of the different factories, we prepare the following table:

Factory   Production (in lakh ₹)   Square root   Radius = $$\frac{\text { square root }}{16}$$
P                  256                  16           1 cm
Q                  576                  24           1.5 cm
R                 1024                  32           2 cm

Section – E
Solve the following:

Question 1. The number of mangoes received from different mango trees in a farm during a season of 30 days is as under. Prepare a frequency distribution by taking class length 5.
In the given data the minimum number of mangoes = 92 and the maximum number of mangoes = 128, and the class length = 5. The number of mangoes is a discrete variable and its range is (128 – 92 =) 36. Therefore, we prepare an inclusive continuous frequency distribution. The initial class, which includes 92 mangoes, will be 90 – 94 and the last class, which includes 128 mangoes, will be 125 – 129. The frequency distribution is obtained as follows:
Inclusive continuous frequency distribution of mangoes received from mango trees during 30 days

Question 2. The data regarding the earnings (₹) of 40 rickshaw drivers during a certain day are as follows. Prepare a frequency distribution having one class as 220 – 239 and class length 20.
In the given data the minimum daily earning of a rickshaw driver = ₹ 200 and the maximum daily earning = ₹ 356. Class length = 20 and the given class is 220 – 239.
Therefore, the initial class, which includes the minimum daily earning, will be 200 – 219 and the last class, which includes the maximum daily earning, will be 340 – 359. The frequency distribution is obtained as follows:
Inclusive continuous frequency distribution of the daily earnings of 40 rickshaw drivers

Question 3. Information on the monthly water consumption (in units) of 50 residents of a region is as under. By taking one of the classes as 25 – 30, prepare an exclusive continuous frequency distribution.
In the given data the minimum monthly water consumption of a residence = 24 units and the maximum consumption = 57 units. The given class is 25 – 30. Therefore, the initial class, which includes the minimum consumption of 24 units, is 20 – 25 and the last class, which includes the maximum consumption of 57 units, is 55 – 60. The frequency distribution is obtained as follows:
Exclusive continuous frequency distribution of the monthly water consumption of 50 residents of an area of a city

Question 4. The data obtained by inquiring the price of an item at 50 different shops are as under. Prepare a frequency distribution having the last class 85 – 90.
In the given data the minimum weight of an employee = 62 kg. The given last class of the frequency distribution is 85 – 90. Therefore, for the initial class, which includes the minimum weight of 62 kg, we get 60 – 65. The frequency distribution is obtained as follows:
Exclusive continuous frequency distribution of the weights (in kg) of 50 employees working in a company

Question 5. Obtain ‘less than’ type and ‘more than’ type cumulative frequency distributions from the following frequency distribution:

Question 6. The following data refer to the daily absence of workers in a factory during 30 days. Prepare an appropriate frequency distribution and hence obtain the ‘less than’ type cumulative frequency distribution.
In the given data ‘number of absences’ is a discrete variable. Therefore an appropriate frequency distribution is a discrete frequency distribution.
Minimum absence is 0 and maximum absence is 6. Therefore the frequency distribution is obtained as follows:
Discrete frequency distribution of absent workers of a factory during 30 days

Question 7. There were 850 students studying in the higher standards of a school. The numbers of students in standards 10, 11 and 12 were in the proportion 8 : 5 : 4. In standard 10, the number of boys is 30 % of the number of students in the school. In standard 11, the numbers of boys and girls are equal. In standard 12, the number of boys is three times the number of girls. Present the above data in tabular form.
In the given data the two attributes are:
1. Standard: 10, 11, 12
2. Sex: Boys, Girls
According to these two attributes the table is prepared as follows:

Table showing the number of students of a school according to their standard and sex

Standard   Boys   Girls   Total number of students
10          255    145            400
11          125    125            250
12          150     50            200
Total       530    320            850

[Explanation: No. of students in Std. 10 = $$\frac{8}{17}$$ × 850 = 400
No. of boys in Std. 10 = 850 × $$\frac{30}{100}$$ = 255 ∴ No. of girls = (400 – 255 =) 145
No. of students in Std. 11 = $$\frac{5}{17}$$ × 850 = 250
No. of boys = No. of girls = $$\frac{250}{2}$$ = 125
No. of students in Std. 12 = $$\frac{4}{17}$$ × 850 = 200
No. of boys = $$\frac{3}{4}$$ × 200 = 150 and the number of girls = $$\frac{1}{4}$$ × 200 = 50]

Question 8. In the year 2013, there were 1200 students studying in a school and of them, 400 were girls. 50 girls were not residing in the hostel. In all, 600 boys were residing in the hostel. In the year 2014, there was an increase of 20 % in the number of boys and the number of girls increased by 30 %. During this year, 260 boys and 100 girls were not residing in the hostel. In the year 2015, 140 boys and 100 girls were newly admitted to the school and all of them resided with the hostel students. Present the above data in tabular form.
In the given data the three attributes are:
1. Year: 2013, 2014, 2015
2.
Residence: Hostel, Not in hostel
3. Sex: Boys, Girls
According to these three attributes, the table is prepared as follows:
Table showing the number of students of a school according to their residence and sex during the years 2013 to 2015
[Explanation: 2013: No. of boys = 1200 – 400 = 800
No. of girls residing in hostel = 400 – 50 = 350
No. of students residing in hostel = 600 + 350 = 950 ∴ No. of students not residing in hostel = 1200 – 950 = 250
2014: No. of boys = 800 × $$\frac{20}{100}$$ + 800 = 960
No. of girls = 400 × $$\frac{30}{100}$$ + 400 = 520
No. of students residing in hostel = 1480 – 360 = 1120
2015: No. of boys = 960 + 140 = 1100
No. of girls = 520 + 100 = 620
No. of boys residing in hostel = 700 + 140 = 840
No. of girls residing in hostel = 420 + 100 = 520]

Question 9. Present the following data in an appropriate tabular form. A bank receives 2000 applications in response to a job advertisement. Of the total applicants, 50 % were graduates, 40 % were post-graduates and the remaining 10 % had a professional degree. Among the graduates, 60 % were males and of them, 25 % were married. 40 % of the female graduates were married. Among the post-graduates, 60 % were males and 40 % of them were married. Among post-graduate females, 50 % were married. Among the candidates with a professional degree, 30 % were females and of them, 60 % were married. The numbers of married and unmarried males having a professional degree were equal.
In the given data the three attributes are:
1. Qualification: Graduate, Post-graduate, Professional degree
2. Sex: Male, Female
3. Marital status: Married, Unmarried
According to these three attributes, the table is prepared as follows:
Table showing the number of candidates for service in a bank according to their qualification, sex and marital status
[Explanation: Graduate: Total No. of candidates = 2000 × $$\frac{50}{100}$$ = 1000
No. of graduate male candidates = 1000 × $$\frac{60}{100}$$ = 600
No. of graduate married male candidates = 600 × $$\frac{25}{100}$$ = 150
No.
of graduate married female candidates = 400 × $$\frac{40}{100}$$ = 160
Post-graduate: Total No. of candidates = 2000 × $$\frac{40}{100}$$ = 800
No. of post-graduate male candidates = 800 × $$\frac{60}{100}$$ = 480
No. of post-graduate married male candidates = 480 × $$\frac{40}{100}$$ = 192
No. of post-graduate married female candidates = 320 × $$\frac{50}{100}$$ = 160
Professional: Total No. of candidates = 2000 × $$\frac{10}{100}$$ = 200
No. of professional female candidates = 200 × $$\frac{30}{100}$$ = 60
No. of professional married female candidates = 60 × $$\frac{60}{100}$$ = 36
No. of professional married and unmarried male candidates each = $$\frac{140}{2}$$ = 70]

Question 10. The following table represents the number of workers of a factory according to their gender, residence and year:
Answer the following questions using the above table:
(1) What is the percentage increase in the total number of workers during the period of five years?
Total No. of workers in the year 2010 = 2000
Total No. of workers in the year 2015 = 3000
∴ The increase in the total number of workers during the 5-year period = (3000 – 2000 =) 1000 workers.
∴ Percentage increase in the total number of workers during the 5-year period = $$\frac{1000}{2000}$$ × 100 = 50 %
(2) Find the percentage decline in the number of non-local workers in the year 2015.
No. of non-local workers in the year 2010 = 500
No. of non-local workers in the year 2015 = 400
∴ The percentage decrease in non-local workers in the year 2015 = $$\frac{(500-400)}{500}$$ × 100 = $$\frac{100}{500}$$ × 100 = 20 %
(3) Find the percentage increase in the numbers of men and women during the period of 5 years.
The percentage increase in the number of male workers during the five years = $$\frac{(2300-1500)}{1500}$$ × 100 = $$\frac{800}{1500}$$ × 100 = 53.33 %
The percentage increase in the number of female workers during the five years = $$\frac{(700-500)}{500}$$ × 100 = $$\frac{200}{500}$$ × 100 = 40 %

Question 11.
A mobile phone manufacturing company produces and sells two types of mobile phones. The particulars are given in the following table. Present them by a suitable diagram.

Particulars         Mobile A   Mobile B
Raw material          5000       6000
Assembly expense      3000       3000
Other expense         4000       4500
Total expense        12000      13500
Selling price        13000      15000

In the given data, details of the production cost and sales of two types of mobiles of a company are given. The expenditures of the different sections related to production are also given. Therefore, considering the selling price, we draw a simple divided bar diagram. For that we prepare the following table:
Drawing a bar proportional to the selling price, we divide the bar according to the production costs of the different sections. The simple divided bar diagram is prepared as in figure 16.

Question 12. Information regarding the average monthly expenses (in ₹) of two families is as under. Present it through a pie diagram.

Particulars       Family A   Family B
Food                20000      16000
Fuel                 5000       4000
Transportation      10000       8800
House rent          15000      18000
Other               22000      18000

We prepare the following table showing the calculation of degrees for the different items of the two families A and B:
Taking appropriate radii, circles are drawn and are divided according to the degrees of the items. We prepare the pie diagrams as follows:
Radius for family A = $$\frac{\sqrt{72000}}{100}=\frac{268.33}{100}$$ = 2.68 ≈ 2.7 cm
Radius for family B = $$\frac{\sqrt{64800}}{100}=\frac{254.56}{100}$$ = 2.55 ≈ 2.5 cm
Pie diagram showing the average monthly expenditure of the two families

Section – F
Solve the following:

Question 1. A sample of 25 lenses is selected from a day’s production of a company manufacturing eye lenses. The thicknesses (in millimetres) of the selected lenses are as under. Distribute these data into five classes of equal length. If the company decides that lenses having thickness less than 1.510 or more than 1.525 are considered defective, then what per cent of the lenses in the sample are defective?
In the given data, the minimum thickness of a lens = 1.505 mm and the maximum thickness = 1.528 mm.
∴ Range = 1.528 – 1.505 = 0.023
The data are to be distributed into 5 classes. ∴ Class length = $$\frac{0.023}{5}$$ = 0.0046 ≈ 0.005
Therefore the initial class, which includes the minimum value 1.505, is 1.505 to 1.510 and the last class, which includes the maximum value 1.528, is 1.525 to 1.530. The frequency distribution is obtained as follows:
Exclusive continuous frequency distribution of the thickness of eye lenses
In the above frequency distribution the number of lenses of thickness less than 1.510 mm is 5 and the number of lenses of thickness equal to or more than 1.525 mm is 4. Therefore in total (5 + 4 =) 9 lenses are defective.
∴ The percentage of defective lenses in the sample = $$\frac{9}{25}$$ × 100 = 36 %

Question 2. The data related to variations in the price of a share for 30 days in a share market are as under. Prepare an exclusive continuous classification having the class limits of one of the classes as 18.5 – 20.5. On the basis of this frequency distribution, answer the following questions:
(1) What is the mid value of the 4th class?
(2) Find the number of days during which the price of the share is at the most ₹ 16.50.
(3) Find the number of days during which the price of the share is at least ₹ 19.50.
In the given data, the minimum closing price of the share is ₹ 10.50 and the maximum price is ₹ 20.80. One class, 18.5 – 20.5, is given. Therefore the initial class, which includes the minimum value of ₹ 10.50, is 10.5 – 12.5 and the last class, which includes the maximum value of ₹ 20.80, is 20.5 – 22.5. The frequency distribution is obtained as follows:
Exclusive continuous frequency distribution
(1) Mid value of the fourth class: The fourth class is 16.5 – 18.5. ∴ Mid value = $$\frac{18.5+16.5}{2}=\frac{35}{2}$$ = 17.5
(2) The number of days during which the closing price of the share is at the most ₹ 16.50 means the number of days during which the closing price is less than ₹ 16.50 = 2 + 6 + 8 = 16 days.
(3) The number of days during which the closing price of the share is at least ₹ 19.50 means the number of days during which the closing price is ₹ 19.50 or more = $$\frac{8}{2}$$ + 2 = 4 + 2 = 6 days
(∵ In 18.5 – 20.5, the 8 days are uniformly distributed. Therefore in 19.5 – 20.5 there are 4 days.)

Question 3. The owner of a factory has decided to produce 50 mixers used as household equipment, but the daily production of mixers changes due to variation in the number of workers. The variation in the production of mixers with respect to a pre-decided number of production (100 units) during 40 days is recorded as under. Prepare an exclusive continuous frequency distribution having class length 6 and the mid value of one of the classes as 3. Also prepare the ‘less than’ and ‘more than’ cumulative frequency distributions.
In the given data, the minimum value of the change in the production of mixers is -10 and the maximum value is 23. Class length = 6 and the mid value of one class = 3 are given.
Therefore the lower limit of that class = 3 – $$\frac{6}{2}$$ = 3 – 3 = 0 and the upper limit of that class = 3 + $$\frac{6}{2}$$ = 3 + 3 = 6
Thus, the given class is 0 – 6. Therefore the initial class of the frequency distribution, which includes the minimum value -10, is -12 to -6 and the last class, which includes the maximum value 23, is 18 to 24. The frequency distribution is obtained as follows:
Exclusive continuous frequency distribution showing the change in the production of mixers during 40 days
‘Less than’ type cumulative frequency distribution
‘More than’ type cumulative frequency distribution

Question 4. The data regarding the heights (in cm) of 30 students of a school are as under. Prepare an inclusive continuous frequency distribution of 6 classes and hence prepare the ‘less than’ and ‘more than’ cumulative frequency distributions. On the basis of it, answer the following questions:
(1) If participation in the NCC activities requires a minimum height of 160 cm, then how many students are eligible to participate?
(2) Find the number of students having height from 153 cm to 163 cm.
(3) Find the maximum height of the one-third of the students having the minimum heights.
In the given data, the minimum height of a student is 141 cm and the maximum height is 168 cm.
∴ Range = 168 – 141 = 27 cm
The data are to be classified into 6 classes. ∴ Class length = $$\frac{27}{6}$$ = 4.5 ≈ 5
Therefore, the initial class of the inclusive type frequency distribution, which includes the minimum value 141, is 140 – 144 and the last class, which includes the maximum value 168, is 165 – 169.
Inclusive continuous frequency distribution showing the heights (in cm) of 30 students of a school

‘Less than’ type cumulative frequency distribution

Height less than upper boundary point   Cumulative frequency cf
139.5                                   0 = 0
144.5                                   0 + 2 = 2
149.5                                   2 + 8 = 10
154.5                                   10 + 8 = 18
159.5                                   18 + 4 = 22
164.5                                   22 + 6 = 28
169.5                                   28 + 2 = 30

‘More than’ type cumulative frequency distribution

Height equal to or more than   Cumulative frequency cf
139.5                          30 = 30
144.5                          30 – 2 = 28
149.5                          28 – 8 = 20
154.5                          20 – 8 = 12
159.5                          12 – 4 = 8
164.5                          8 – 6 = 2
169.5                          2 – 2 = 0

1. To participate in the NCC a height of 160 cm is required. Therefore the students with height 160 cm or more can join the NCC. The number of such students is (6 + 2 =) 8.
2. In 150 – 154, there are 8 students. ∴ In 153 – 154, the number of students = $$\frac{8}{5}$$ × 2 = 3.2 ≈ 3
In 155 – 159, there are 4 students.
In 160 – 164, there are 6 students. ∴ In 160 – 163, the number of students = $$\frac{6}{5}$$ × 4 = 1.2 × 4 = 4.8 ≈ 5
Therefore the number of students whose heights are from 153 cm to 163 cm = the number of students in 153 – 154 + the number of students in 155 – 159 + the number of students in 160 – 163 = 3 + 4 + 5 = 12 students.
3. One-third of the students = $$\frac{30}{3}$$ = 10
The heights of the 10 students having the least heights fall in the classes 140 – 144 and 145 – 149. Therefore their maximum height = 149 cm. Thus, the maximum height of the $$\frac{1}{3}$$rd of the students having the least heights is 149 cm.

Question 5.
The students of a university were classified according to faculty and gender. 60 % of the total of 40,000 students were boys. The number of girls in the engineering faculty was three times the number of girls in the commerce faculty. 15 % and 10 % of the total number of university students were boys and girls respectively who belonged to the medical faculty. 20 % of the total number of students of the university belonged to the faculty of science, and among these students the number of girls was one-seventh of the number of boys. 7 % and 17 % of the total number of students of the university were boys and girls respectively belonging to the arts faculty. 3.75 % of the total number of students of the university belonged to the commerce faculty and the proportion of boys and girls among them was 3 : 7. Present the above data in an appropriate table.
In the given data the two attributes are:
1. Faculty: Engineering, Medical, Science, Arts, Commerce
2. Sex: Boys, Girls
According to these two attributes the table is prepared as follows:

Table showing the number of students of a university according to their faculties and sex

Faculty        Boys    Girls   Total no. of students
Engineering    7750    3150    10900
Medical        6000    4000    10000
Science        7000    1000     8000
Arts           2800    6800     9600
Commerce        450    1050     1500
Total         24000   16000    40000

[Explanation: No. of boys = 40000 × $$\frac{60}{100}$$ = 24000
No. of girls = (40000 – 24000 =) 16000
No. of girls in the medical faculty = 40000 × $$\frac{10}{100}$$ = 4000 and No. of boys = 40000 × $$\frac{15}{100}$$ = 6000
∴ No. of students in the medical faculty = (4000 + 6000 =) 10000
No. of students in the science faculty = 40000 × $$\frac{20}{100}$$ = 8000
Suppose the number of boys = x. Then the number of girls = $$\frac{x}{7}$$. Now, x + $$\frac{x}{7}$$ = 8000, so x = 7000.
∴ No. of boys = 7000 and No. of girls = $$\frac{7000}{7}$$ = 1000
No. of boys in the arts faculty = 40000 × $$\frac{7}{100}$$ = 2800 and No. of girls = 40000 × $$\frac{17}{100}$$ = 6800
∴ The number of students in the arts faculty = 2800 + 6800 = 9600
No.
of students in the commerce faculty = 40000 × $$\frac{3.75}{100}$$ = 1500
The proportion of boys and girls is 3 : 7.
∴ No. of boys = $$\frac{3}{10}$$ × 1500 = 450
No. of girls = $$\frac{7}{10}$$ × 1500 = 1050
No. of girls in the engineering faculty = 3(1050) = 3150
No. of boys = 24000 – (6000 + 7000 + 2800 + 450) = 24000 – 16250 = 7750
∴ The number of students in the engineering faculty = 7750 + 3150 = 10900]
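The row-and-column bookkeeping used throughout these tabulation answers can be checked mechanically: every marginal total must equal the sum of its cells. As an illustrative sketch (not part of the original solutions), the two-way table from Question 5 above can be verified in Python by recomputing its totals:

```python
# Consistency check for the two-way table of Question 5:
# faculty -> (number of boys, number of girls).
table = {
    "Engineering": (7750, 3150),
    "Medical": (6000, 4000),
    "Science": (7000, 1000),
    "Arts": (2800, 6800),
    "Commerce": (450, 1050),
}

boys_total = sum(boys for boys, girls in table.values())
girls_total = sum(girls for boys, girls in table.values())
grand_total = boys_total + girls_total

# Each marginal total stated in the answer should be reproduced exactly.
print(boys_total, girls_total, grand_total)  # 24000 16000 40000
```

The same check applies to any of the tables above: each cell is derived from a percentage or ratio of a known total, so recomputing the margins catches arithmetic slips such as a mis-copied fraction.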
# Getting started

At the core of eugene lies a simple outbreak model, which starts with a number of index cases $$n$$. The user must also specify $$\mathcal{R}_0$$, $$k$$, the generation time between incidences $$D$$, the shape of the Gamma distribution parameterized by the parameter gamma_shape, the maximum number of days to simulate days_elapsed_max, and the maximum number of cases beyond which to stop simulating max_cases. We can specify those parameters in code like so:

```python
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(2020)

parameters = dict(
    R0 = 2,                 # reproduction number
    k = 1,                  # overdispersion factor
    n = 1,                  # number of index cases
    D = 10,                 # generation time interval
    gamma_shape = 2,        # gamma function shape parameter
    max_time = 90,          # maximum simulation time
    days_elapsed_max = 52,  # number of days from index case to measurement
    max_cases = 1e4         # maximum number of cases to simulate
)
```

Now we can simulate 100 outbreaks with these initial parameters:

```python
from eugene import simulate_outbreak

fig, ax = plt.subplots(figsize=(4, 3))

for i in range(100):
    times, cumulative_incidence = simulate_outbreak(**parameters)
    ax.semilogy(times, cumulative_incidence, '.-', color='k', alpha=0.2)

ax.set_xlabel('Time [days]')
ax.set_ylabel('Cumulative Incidence')
fig.tight_layout()
plt.show()
```

Every epidemic curve starts at an incidence of unity, and the cumulative incidence grows roughly exponentially, sometimes terminating with zero new cases before it reaches the end of the simulation domain (set by the days_elapsed_max parameter).
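The outbreak model described above is, at heart, a branching process in which each case produces an overdispersed (negative-binomial) number of secondary cases with mean $$\mathcal{R}_0$$ and dispersion $$k$$. The following is a simplified, illustrative sketch of that idea in plain NumPy; it is not eugene's implementation, and the generation-by-generation bookkeeping here is an assumption of this sketch:

```python
import numpy as np

def total_cases(n_index, R0, k, n_generations, rng):
    """Simplified branching process: each active case infects a
    negative-binomial number of new cases with mean R0 and dispersion k."""
    active = n_index
    cumulative = n_index
    for _ in range(n_generations):
        if active == 0:
            break  # the outbreak has died out
        # numpy's negative_binomial(n, p) has mean n * (1 - p) / p,
        # which equals R0 when n = k and p = k / (k + R0)
        active = int(rng.negative_binomial(k, k / (k + R0), size=active).sum())
        cumulative += active
    return cumulative

rng = np.random.default_rng(2020)
totals = [total_cases(n_index=1, R0=2, k=1, n_generations=5, rng=rng)
          for _ in range(1000)]
# With k = 1 many simulated outbreaks die out quickly, while a minority
# grow large: the overdispersion that the k parameter controls.
print(np.mean(totals), max(totals))
```

Averaged over many runs, the expected cumulative incidence after $$g$$ generations is $$1 + \mathcal{R}_0 + \dots + \mathcal{R}_0^g$$, which for $$\mathcal{R}_0 = 2$$ and five generations is 63.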
# Math: Coordinate Geometry

Math Expert (Joined: 02 Sep 2009), 02 Sep 2017:

Arsh4MBA wrote:
Hello Bunuel, Thanks a lot for the article. I have one doubt. Is it that a line with negative slope would definitely pass through quadrants II and IV, and would pass through I or III depending on the values of the x and y intercepts?

Yes. If the slope of a line is negative, the line WILL intersect quadrants II and IV. The x and y intercepts of a line with negative slope have the same sign. Therefore if the x and y intercepts are positive, the line intersects quadrant I; if negative, quadrant III.

Intern (Joined: 07 Dec 2016), 06 Sep 2017:

Hi Bunuel, Under the heading "Line equation" in Coordinate Geometry: The equation of a straight line passing through points P1(x1, y1) and P2(x2, y2) is:
$$\frac{y−y1}{x−x1}=\frac{y1−y2}{x1−x2}$$
I think it should be:
$$\frac{y−y1}{x−x1}=\frac{y2−y1}{x2−x1}$$
This is because the slope for two points is: $$\frac{y2-y1}{x2-x1}$$
Let me know if I am missing anything here.
Math Expert (Bunuel) | 06 Sep 2017, 20:34

BloomingLotus wrote:
I think the line equation should use $$\frac{y_2-y_1}{x_2-x_1}$$ rather than $$\frac{y_1-y_2}{x_1-x_2}$$, because the slope through two points is $$\frac{y_2-y_1}{x_2-x_1}$$.

Both are the same:

$$\frac{y_2-y_1}{x_2-x_1}=\frac{-(y_1-y_2)}{-(x_1-x_2)}=\frac{y_1-y_2}{x_1-x_2}$$

Manager (SandhyAvinash) | 12 Sep 2017, 16:37

Can anyone guide me to where I can practice only coordinate-plane questions?

Math Expert (Bunuel) | 12 Sep 2017, 20:00

Use our search engine to find questions from a specific category: https://gmatclub.com/forum/search.php?view=search_tags

Intern (Raj94*) | 18 Sep 2017, 21:10

Bunuel sir, I need Coordinate Geometry as a PDF. I want to download the attachment.

Math Expert (Bunuel) | 18 Sep 2017, 22:26

Check here: https://gmatclub.com/forum/bunuel-signa ... 70062.html

Intern | 11 Feb 2018, 03:50

Really great stuff! I initially had some questions after reading through this book section, but found that they were answered already (like the gradient formula, where sometimes it's y2-y1 and at other times y1-y2). Nevertheless, I wanted to thank you so much for this!
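Bunuel's point, that negating both the numerator and the denominator leaves the slope unchanged, is easy to sanity-check numerically. A tiny sketch (the sample points are arbitrary):

```python
# Check that both orderings of the two-point slope formula agree.
def slope(p, q):
    """Slope of the line through points p and q (assumes distinct x-coordinates)."""
    (x1, y1), (x2, y2) = p, q
    return (y2 - y1) / (x2 - x1)

p1, p2 = (1.0, 2.0), (4.0, 8.0)

forward = slope(p1, p2)    # (y2 - y1) / (x2 - x1)
backward = slope(p2, p1)   # (y1 - y2) / (x1 - x2)

# Negating both numerator and denominator cancels, so the two forms are equal.
assert forward == backward == 2.0
```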
=D
http://www.rendena100.eu/public/boussinesq/doc/html/index.html
The Boussinesq Model 3.2.1

The program Boussinesq simulates the dynamics of the water-table surface in a hillslope or a small catchment. The theory is based on the 2D Boussinesq equation:

$s\frac {\partial \eta} {\partial t} = \nabla \cdot \left[ K_S \, H (\eta,x,y) \, \nabla \eta \right]+Q$

where $$\eta$$ is the piezometric elevation (the unknown), $$t$$ is time, $$\nabla$$ is the spatial gradient operator, $$H(\eta,x,y)$$ is the thickness of the aquifer, which is a function of $$\eta$$ and of position, $$Q$$ is a source term which also accounts for boundary conditions, $$K_S$$ is the saturated hydraulic conductivity, and $$s$$ is the porosity.

Warning: The Boussinesq equation is solved with finite-volume numerical methods according to Casulli, 2008 (http://www3.interscience.wiley.com/journal/121377724/abstract?CRETRY=1&SRETRY=0 and http://onlinelibrary.wiley.com/doi/10.1002/wrcr.20072/references). Maps of distributed quantities are stored as vectors of double-precision floating-point numbers (the DOBLEVECTOR data struct type, in this case).

Version 3.2.1
Date 2008-2009 (2013)

Attention: Boussinesq is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.
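To make the structure of the equation concrete, here is a deliberately simplified 1-D sketch. It is a fully explicit update, not the semi-implicit finite-volume scheme of Casulli (2008) that the program actually uses, and every parameter value below is made up for illustration:

```python
import numpy as np

# Illustrative 1-D explicit update for  s * d(eta)/dt = d/dx( Ks * H * d(eta)/dx ) + Q.
# NOT Casulli's semi-implicit finite-volume scheme; this only shows the equation's
# structure, and all parameter values are invented for the example.
def step(eta, zbed, Ks, s, Q, dx, dt):
    H = np.maximum(eta - zbed, 0.0)            # saturated thickness H(eta, x)
    T = Ks * H                                 # transmissivity per cell
    Tface = 0.5 * (T[1:] + T[:-1])             # transmissivity averaged to cell faces
    flux = Tface * (eta[1:] - eta[:-1]) / dx   # diffusive flux across faces
    div = np.zeros_like(eta)
    div[1:-1] = (flux[1:] - flux[:-1]) / dx    # flux divergence in interior cells
    return eta + dt * (div + Q) / s            # no-flow ends: boundary cells unchanged

eta0 = np.linspace(1.0, 0.5, 21)               # initial water-table profile (m)
eta1 = step(eta0, zbed=np.zeros(21), Ks=1e-3, s=0.3, Q=0.0, dx=1.0, dt=10.0)
assert eta1.shape == eta0.shape
```

An explicit step like this is only stable for small time steps (dt on the order of s·dx²/(Ks·H)); the semi-implicit treatment in the real code removes that restriction, which is one reason it is used.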
https://guitarempire.com/12-string-electric-guitar-history-how-to-play-an-electric-guitar-for-beginners.html
How come a Gibson Les Paul, an SG and an ES-335 sound so different that people can tell them apart while blindfolded? If wood has nothing to do with it, these guitars with the same pickups in them should sound exactly the same, yet each has its own character. I understand that the string moving in the magnetic field induces an electric impulse, and that impulse is the signal; but it is the way the string vibrates that shapes the signal, since plucking softly or hard produces a different signal. Wouldn't it then be logical that the vibration of the guitar body has an influence on the movement of the string?

Hum in pedalboards is usually "ground loop hum." You have two paths to ground: your audio ground and your power supply ground. You could use an expensive power supply with isolated grounds. But all you have to do is break one of the ground connections. You could disconnect the audio ground at one end of each of your patch cords. Or better, if you use one power supply, connect the hot and ground to only one of your pedals and clip the ground wire on all the other pedal connections in your daisy chain. The power connections will then get their grounds through the audio grounds. No more hum.

The device can be mounted on, and removed quickly from, almost any flat-backed acoustic guitar using a system of magnetic rails. The ToneWood-Amp is designed for guitars with either a magnetic or piezo pickup (bridge or soundhole), but the team behind the device say they're also putting together a "Technician-Free" pickup bundle for guitars without a pickup.

As a side note, many guitarists refer to the vibrato as "tremolo" or, worse yet, "whammy bar". (I sometimes do, too, when my mouth is moving unaccompanied by my brain.) Vibrato refers to varying the pitch, while tremolo is varying the volume. Leo Fender himself is largely responsible for the misuse of the words: he called the bar on his guitars the "tremolo" and even had the tremolo effect on his amplifiers labeled as "Vibrato".
One of the most overlooked and shockingly good guitars I have ever played in my 23 years of chopping wood. In their rich history there have been a few misses, but overall Aria guitars are superior to their competitors, especially at this price point. My 1977 Aria Les Paul copy has at least twice the balls of my buddy's six-year-old Gibson and tons more playability. Forget about comparing it to an Epiphone, seriously. eBay yourself an Aria electric and you WILL be pleasantly surprised. Aria acoustics: if you're reaching for a nylon-string, Aria makes some of the best classical guitars, with a history of employing some of the most noted artisans of the craft, such as Ryoji Matsuoka. Fine craftsmanship all around; they are built with quality woods and have a tendency to get better with age, laminated or not. As for steel-strings, I've only played one, to be honest, but that Martin "lawsuit" copy was a work of art. Thank you.

Just ask any savvy stompbox builder or low-tuned 7-string player: sometimes the best way to add power to your low tones is to remove a bit of bass. That's because the lowest frequencies in your signal disproportionately overdrive your amp and effects. Siphoning off just a bit of bass can add clarity and focus. At extreme settings, the filtering can produce sharp, squawking tones akin to those of a '60s treble booster pedal (not a bad thing). If you've ever grappled with high-gain tones that make your amp fart out, here's your flatulence remedy.

This can all get a little tricky and can become overwhelming, especially if you have never tackled this type of job before. If this is the case, I strongly suggest starting with one of the models that are easier to wire; for example, Telecasters are significantly easier to work on, as the scratchplate will often come pre-loaded with pickups.
However, if you purchase a kit guitar such as an LP, or you want to upgrade your electrical components (which is often the case with an entry-level kit), understanding some basics about guitar electronics is useful.

These are the settings I use as my basic rack for adding rock guitar sounds in Cubase, and you might also find it handy as a point of departure, so it's worth saving as a track preset. To do this, right-click in the audio track containing the 'rack', and choose 'Create Track Preset' from the context menu. When the Save Track Preset dialogue box appears, simply name it and save it: now you can call up your rack for any audio track in any Cubase project!

The Marshall MG series are also strong contenders; a lot of players use them, and they're ideal for the kind of music you like. You see them in a lot of studios. Not a tube amp and all that, but perfectly serviceable, and they have some onboard effects, which can be fun. I used a mic'd MG50 when I played in Kenny's Castaways for a year or so in the house band, and people said I sounded great. The amp cost me $280 on sale, I think. I found the sound of the MG superior to the Line 6, but not so much that I'd pay a lot more money for it. If I had a gig where I needed options and didn't already own the effects I needed, I'd have no problem using the Line 6.

Although not as dominant in amp modeling, Guitar Rig takes the top spot in our guitar effects software list. It leads the pack with its meticulously detailed effects modeling. Its 54 modeled effects closely follow the behavior of legendary stompboxes and studio racks. Even professionals have a hard time picking out the real pedal against this guitar effect software in a blind test. Its versatile design allows you to chain effects together in virtually any manner, without the hassles of cables, space and budget constraints. It is truly a truckload of gear in one software package. Retail price: $199.00.

In a band and got your slot to wail? Think about it.
Shredding scales is all well and good, but the best songs and solos have structure, tempo changes and memorable licks. It may be a cliché, but listen to Jimmy Page's solo in Led Zeppelin's "Stairway to Heaven": now that's how you build up to a solo. It may be your time to shine, but don't just gush everywhere. Think about structure and let your solos build and breathe.

I became more and more frustrated as my playing did not match my ambitions at all. I tried to listen to records to figure out what was being played. I tried to come up with the proper techniques for playing the riffs that I could hear. I tried to make my guitar and my playing sound the way they should. But even after long hours, it always felt like I did not quite get there. What I really wanted was to be a Rock Star! The written music available in the music stores was expensive and incomplete. There was nobody around who could make me understand what a power chord was, or how to mute individual strings while letting others ring. I was locked into my open-chord, basic folk-guitar strumming background. I knew that I needed a totally new approach to become the lead, riff and chops playing blues, pop and rock guitar player I wanted to be. And there was no way that I could see how to simply snap out of my predicament...

I am now building several models which I offer as my signature work. I've always had a special affinity for archtop guitars, but as you'll see in this website, I will go wherever the creative impulse takes me. The instruments I am building now are a distillation of the best design ideas I've found in classic instruments, re-imagined and evolved into higher form and function, as fine tools for discerning artists.
Being by nature rather sceptical, I have to admit to initially dismissing many of the recording methods in this article as 'studio snake oil', and because there was usually too little time during my own sessions to experiment with new ideas, I'd usually end up with an SM57 glued to the speaker grille by default. Taking the time out to trial the above techniques in the studio showed me quite how much I had been missing — not only much better raw recordings, but also tremendous extra flexibility at mixdown. But don't take it from me — listen to the audio examples for yourself and make up your own mind. If they don't expand your recording horizons, I'll eat my SM57... Since they present a finer break point at the neck end of the strings’ speaking length, narrower vintage-gauge frets are generally more precise in their noting accuracy. From this, you tend to get a sharper tone, possibly with increased intonation accuracy, plus enhanced overtone clarity in some cases, which could be heard as a little more “shimmer.” If you’re thinking these are all characteristics of the classic Fender sound, you’d be right—or they are, at least, until you change those vintage frets to jumbo. That's what I was thinking. Have you seen what those things go for when one does pop up for sale? It's nothing for those to go for close to $10k. That's insane for something non-vintage, but that's just my opinion. It's a bit excessive for what's essentially a two humbucker shredder, even if it is handmade over the course of nine years and the body is a piece of a 12th century Viking ship that's been soaked in mead for six centuries and aged to exquisiteness or whatever the fuck. I blame Misha Mansoor. He's got a bunch of guitar nerds all fucked up in the head now. Some guitarists and guitar makers avoid this by including an additional resistor, around 4.7kOhms, in series with the capacitor. 
This provides a minimum level of resistance, so the tone circuit is never at “zero” even when the knob indicates it. You can see in the chart that around 4kOhms (about “1” on the tone pot knob), there’s no hump in the midrange, just a very rapid falloff in the upper mids and treble frequencies. The history of signal modification isn’t just one of pleasing the ear through unconventional methods. It works both ways: Guitar effects have modified their users, just as much as their users and engineers have modified their sound. New effects can change a guitarist’s playing ability completely, concealing their technique as well as embellishing it. U2’s The Edge, for example, is known for his restraint of technique by embedding different rhythms within delay settings. This depends on personal preference; changing the order of drive pedals changes how they sound when used together. For instance, a clean boost placed before a heavy distortion or fuzz will result in a louder boosted signal hitting the heavier distortion circuit which in turn works that circuit harder and you get heavier distortion. If you place that clean boost after the heavy distortion, it will just make the original distorted sound louder. Experiment with different placement order and you will find your own preference. A notable line produced by Ibanez is the Artwood series, which has combined old world craftsmanship with modern manufacturing to create some pretty solid entry-level guitars; a great example of which is the AW54CEOPN. While the Ibanez AW54CEOPN is an acoustic-electric guitar, the main focus of its design was its acoustic tone. The guitar utilizes an open pore finish, which is intended to allow the guitar to resonate more freely by minimizing the amount of finish applied to it. It’s hard to say how effective this is in practice due to the guitar’s laminated back in sides, though there doesn’t seem to be any widespread complaints about the guitar’s tone. 
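A rough way to see what that series resistor buys you in the tone circuit described above is to treat the rolled-off tone control as a first-order RC low-pass with corner frequency f_c = 1/(2*pi*R*C). This is a simplification (the real circuit also interacts with the pickup's inductance and the volume pot), and the 0.022 uF cap value below is an assumed typical value, not taken from the text:

```python
from math import pi

# First-order RC view of a rolled-off guitar tone control.  With the pot at
# "zero", a ~4.7k series resistor puts a floor under the corner frequency
# f_c = 1 / (2*pi*R*C), so the upper mids and treble roll off sharply but the
# control never shorts the signal's highs completely to ground.
# Assumption: a common 0.022 uF tone cap (not stated in the article).
def cutoff_hz(r_ohms, c_farads):
    return 1.0 / (2.0 * pi * r_ohms * c_farads)

fc_floor = cutoff_hz(4.7e3, 0.022e-6)  # pot fully rolled off, resistor floor
assert 1400 < fc_floor < 1700          # roughly 1.5 kHz: treble gone, mids survive
```

With R around 4-5k the corner sits near 1.5 kHz, which matches the article's description of "a very rapid falloff in the upper mids and treble" at that setting.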
I just started using this book, never having played before, and am finding it totally easy to follow. The friendly narrative guides the reader through every step, explaining the simplest of terms and concepts clearly and concisely. And yes, the CD is funky and you can play along with it more or less straight away AND sound good, which keeps you motivated.

This is easily the best multi-effects pedal for metal, especially if you need an easy-to-use option with just the essentials packed in. Within the Valeton Dapper Dark Effect Strip you have a built-in tuner, a Higain effect designed for brutal distortion sounds, a lush Chorus to bring out those riffs and add more weight to your sound, as well as a Delay effect with tap tempo that lets you add everything from slap-back delay to long, drawn-out echoes. Best of all, you have a Boost pedal which throws in +12 dB of gain so you can stand out from the mix when you kick into a solo or need a certain riff to really jump out.

The one-string guitar is also known as the Unitar. Although rare, the one-string guitar is sometimes heard, particularly in Delta blues, where improvised folk instruments were popular in the 1930s and 1940s. Eddie "One String" Jones had some regional success. Mississippi blues musician Lonnie Pitchford played a similar, homemade instrument. In a more contemporary style, Little Willie Joe, the inventor of the Unitar, had a rhythm and blues instrumental hit in the 1950s with "Twitchy", recorded with the Rene Hall Orchestra.

Here we have another vintage Japanese GREAT find, this example a beautiful, pretty much exact copy of a vintage Martin D-45. This is a very high-quality, Lawsuit-era Aria Pro II, model AW40, made in Japan. From information on the Internet concerning dating these, the guitar's serial number would lead to 1976 manufacture. However, I could not find the AW40 model cataloged until the late '70s... but it's a '76, consistent with all the others. THIS is one beautiful guitar! It exudes fine, detailed craftsmanship. This was Aria's flagship dreadnought of the period, with D-41-ish features. From an original vintage Aria catalog, AW40 features include: "Dreadnought sized, Solid Sitka Spruce top, Solid Brazilian Rosewood back and sides, bridge fingerboard and veneer headstock overlay with MOP logo, Marquetry Purfling" (the top looks to be solid, while the sides and back appear to me to be laminated). The catalog can be viewed at matsumoku.org, a site that deals with the history of Matsumoku-made instruments like Aria, Electra and others. This guitar has the Martin classic snowflake mother-of-pearl inlays, abalone binding and rosette, and a fully bound headstock and gorgeous rosewood fingerboard. The headstock also has a rosewood overlay. The bookmatched rosewood on the back side is especially easy on the eyes.
The guitar is all original, with no repairs, and with the original tuning keys. It is in JVG-rated condition: excellent used vintage, 8.8/10. WoW... it's 35 years old, and the woods have opened up now like fine wine; the tone is richer and has mellowed as only time can provide. No cracks or repairs ever. It plays very well with good action and has a nice warm, rich tone. The neck is arrow straight. Frets have minimal wear, with no buzzing anywhere on the fingerboard... this is the one! At this link you can view more pictures of this guitar; please cut & paste the following link: https://picasaweb.google.com/gr8bids/AriaPro2AW40D45BrazilianRosewood?authkey=Gv1sRgCOmS2c3RvMGpUg#slideshow/5609409732594635106.

The plectrum, or flat pick, is another key piece of essential equipment. For electric guitars, it tends to be a thin piece of plastic, metal, shell or other material shaped like a teardrop or a triangle. There are also thumb picks mounted on rings and finger picks worn on the player's fingertips; you'll see electric guitarists using both of these as well as a standard pick.

This is a wide range of electric guitar series that have a stylish body and deliver high-quality sound.
Cort guitars are fabricated by a South Korean manufacturer and have been on the market since 1973. Those who are keen on the appearance of their guitar can opt for this brand. These electric guitars are available at an affordable price range, between 10,000 and 40,000 INR.

The SG Standard is Gibson's all-time best-selling guitar. It was conceived in 1961 and originally released as the new Les Paul. It featured distinct horn-shaped cutaways, and the neck joint was moved three frets, which made the guitar lighter and allowed easier upper-fret access. In addition to these changes, the body was slimmer than the Les Paul Standard and the neck profile was more slender. However, with Mr. Paul preferring the sturdier design elements of his original model, and due to contractual complications, his name was ultimately removed. Where Les Paul saw a mutation of his original design, others saw genius; from '63 on, the Les Paul name was removed and the SG, or "Solid Guitar," was born.

"The tone thing is amazing because you can have one rig, have three different guitar players, and each guy can play the same exact thing and it's going to sound different," says L.A. Guns guitarist Stacey Blades. "It's all in the hands." Waara from Line 6 agrees. "Any guitar player will tell you, at the end of the day, it's in your hands and you will sound like you will sound," he says. The percentage of influence the hands wield is shockingly high.

Guitar models currently include the Master Class, American Series, Oregon Series, Cascade series, Atlas series, Passport Plus, and Passport, as well as 12-string models and bass models.
The Voice series, reviewed by Guitar Player in 2012, was praised for the quality of construction and various innovative elements, including a “Tru-Voice Electronics System” which, according to Dave Hunter, “for live performance … comes closer to a seamless acoustic-to-amplified transition than virtually any other flat-top I’ve played.”[2] Some people like to play the two notes on 5th and 4th strings with a small barre with the 3rd finger. It's O.K. to do that, but I think using two fingers gives you a better finger position on the notes; you'll get a better sound that way, it makes it easier to change chords most of the time and easier to get all the thin strings muted. I strongly advise to learn it this way, and then if you still prefer to use the little barre you have the option of choosing whichever one works best in any situation! “Well, the legends didn’t use pedals.” Whenever somebody says something like this, and you ask them to whom they are referring, they’re often misinformed and factually wrong. “Jimmy Page”. Uh, ever see him use a Tone Bender Mk II? “Jimi Hendrix.” Please feel free to complete a Harry Potter novel while I finish laughing. “Stevie Ray Vaughan.” Ibanez and Maxon should retire a green Tube Screamer colored banner with his name hanging from their company rafters. This list goes on and on. Yes, there are lots of cool dudes back in the old times who didn’t use pedals to help them create some classic tones, but once they had the chance, they chose to. Many web surfers contact me looking for a wiring diagram for an unusual / no name / import guitar after having no luck online. And sometimes you aren't going to find it, however, if you have an electric guitar that is similar to lets say a Strat ... it has 3 single coil pickups (and they are 2 wire pickups), one 5 way switch, 2 tone pots and 1 volume pot then you can simply use a Strat wiring diagram. It's often easiest to think of the instrument in terms of components not brand. 
The Hi-Flier guitar, which was possibly built in the Matsumoku factory, underwent multiple phases during the course of its production. Each of the Hi-Flier's four manufacturing phases came with a variety of feature changes, ranging from simply switching the color of the pickguard to actually fitting the guitar for humbuckers rather than the P90-style pickups it originally came with.

Flanger effects simulate the studio trick of repeatedly putting your thumb on a tape recorder's reel for a second and then letting the reel (and the music's pitch) catch back up, while a dry (unaffected) signal plays alongside. Flangers usually have a depth setting, which controls the intensity of the effect, and a rate control that adjusts the speed of the cycles.

"Rock guitarists are incredibly conservative and traditional," says Dr. Millard. "We like to think of ourselves as breaking all the bonds, and we go back to the fifties, when rock and roll was revolutionary. It is not revolutionary. It is very traditional, very conservative, and musicians are really stubborn about change. We have a cultural understanding that old is better than good."

Some effects, such as flanger, wah-wah, and delay, are obvious to the ear. But others, such as compression, reverb, and even distortion, are core elements of your tone, so you might not always notice these as "effects." But used artfully, or sometimes even just correctly, they can take you to tonal utopia. Even if your personal style doesn't call for mind-altering sound, you can still improve your sound by using effects.

Our guide to guitar strings, the hope and savior of beginners across the world. We're going to cover the types of guitar strings, how they're made, the best brands, the standard gauges, how to pick the right ones for your instrument and style, what to expect in terms of cost, and much more. Take a ride with me through Ledger Note's guitar string guide...
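The flanger described above, a short delay swept by a slow oscillator and mixed with the dry signal, can be sketched in a few lines of code. This is a bare-bones illustration of the depth and rate controls, not a production DSP design, and all parameter values are arbitrary:

```python
import math

# Minimal flanger sketch: mix the dry signal with a copy delayed by a few
# milliseconds, where the delay time is swept by a low-frequency oscillator.
# "depth" scales the sweep width and "rate_hz" is the LFO speed, mirroring the
# two knobs described in the text.  Values chosen for illustration only.
def flange(samples, sr=44100, base_ms=3.0, depth=0.7, rate_hz=0.25):
    out = []
    base = base_ms * 1e-3 * sr                  # base delay in samples
    for n, x in enumerate(samples):
        lfo = 0.5 * (1.0 + math.sin(2.0 * math.pi * rate_hz * n / sr))
        d = int(base * (1.0 - depth * lfo))     # swept delay, in samples
        delayed = samples[n - d] if n - d >= 0 else 0.0
        out.append(0.5 * (x + delayed))         # equal dry/wet mix
    return out

tone = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(4410)]
wet = flange(tone)
assert len(wet) == len(tone)
```

The characteristic "jet" sound comes from the comb filtering the summed dry and delayed copies create; sweeping the delay moves the comb's notches up and down the spectrum.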
Second, just like removing the pickup selector, you will need to access the back electronics cavity or remove the pickguard; refer to the pickup selector section for more details. Take note of which wires are soldered to which lugs before you remove the pot. If you are not familiar with electric guitar wiring, I suggest that you draw a picture of the selector and label the soldered wires. Once you know where everything has been wired, you can cut the wires close to the lugs and remove the old pot. Then you can bolt the new pot in place, solder the wires to the lugs, replace the cavity covers or pickguard, and replace the knob. For more information about how to solder wiring, see the soldering page.

The tuner goes first. This one is pretty easy: it doesn't want to hear an effected signal; it wants to see the direct input from the guitar. Another reason for putting the tuner first is that if you're using any true-bypass pedals, the TU-3 will give them a buffered signal, which will protect your tone from loss of signal in the cables when other pedals are off. This is another one of the reasons there are so many TU tuners in pedalboards worldwide, even ones using nothing else but boutique true-bypass stompers.

Due to their acoustic, aesthetic and processing properties (workability, finishing, joints), wood and ligno-cellulose composites are the most valued materials for the construction of musical instruments. The guitar is made up of a complex structure, formed by a vertical wall in a curved shape (technologically named the "sides") and two faces made up of ligno-cellulose plates, so that it should...

The guitar measures 41 inches in length, and it comes with a 25.75-inch scale and 20 frets for various playing techniques. You also get strong D'Addario strings for reliable performances every time, as well as enclosed die-cast gold tuners, so you never play an off note.
This dreadnought guitar features a cutaway so you can easily practice finger techniques on the higher frets. Clapton himself has repeatedly called Guy “the greatest living guitarist.” Hendrix literally knelt at Buddy’s feet in the late Sixties, the better to study his riffs. Guy’s secret? He combines an old-time blues feel with the technical facility of a modern guitar player. He was a youngster at the legendary Chess Records in early Sixties Chicago. Fresh up from Lettsworth, Louisiana, Guy was some 20 years junior to giants like Muddy Waters and Howlin’ Wolf, yet old enough and gifted enough to share the studio with them. Quality replacement pot from Bourns. Knurled 1/4" shaft fits most knobs. Low torque, carbon resistive element, great replacement in many applications using passive humbucker or single-coil pickups. Note that length of threaded part of shaft is 3/8" - measure to make sure that this is long enough for your application, especially if the pot mounts through the wooden guitar body. (This pot will not work on Les Pauls, for example). 250K, Special A2 taper preferred by guitar and bass players. Nut and washer included. Note: threaded bushing diameter is 3/8", like most 24mm "quarter-sized" pots. I started using cobalt .010 and I've found they have plenty of clarity and bite. Please keep in mind there are many factors going into your sound. Amp, guitar pickups, strings, pick type, etc. Don't be disappointed if you get some premium strings that don't change your sound if your pickups can't pick up the movement very well. Start at a regular light. .010 is plenty flexible, and they won't break as often as a 8 or 9. Don't get caught up in the rookie mentality of "THIS is what kind of guitarist I will be, so I need everything to fit that." Experiment with different sizes and types. ESP is another Japanese guitar brand that makes this top 10 list with its many artist endorsements and actual user recommendations. 
Founded in 1975, it started as a builder of custom-made parts for guitarists who wanted to personalize their existing instruments. Now ESP is known worldwide for their hot-rodded versions of popular guitar shapes, and other unique and eccentric designs, built to please modern rock and metal players.

Some bass players cannot use a bass combo amp, whether due to strict noise and disturbance rules in their apartment, lack of space to store a combo amp (if they live in a small room), or the need for a set-up which can amplify multiple types of instruments and/or voice. Alternatives to buying a bass amp for people who have noise or space constraints include a headphone amplifier or a micro practice amp which includes a headphone jack (on bass amps, connecting headphones to a headphone jack automatically turns off the main loudspeaker). Multi-instrumentalists and bassist-singers can consider a keyboard amplifier, a small PA system, or some models of acoustic instrument amplifiers which include bass as one of the supported instruments; all of these options have full-range speakers that can handle the bass range.

Overdrive pedals are very different from distortion pedals: without getting too technical, they drive/push your guitar signal harder rather than changing the sound completely the way a distortion pedal does. An overdrive pedal retains a lot of the original sound of your guitar and amp but pushes the amplifier harder to give it a heavier, thicker signal. They're ideally used with valve/tube amps, as they push the tubes to their limit and allow them to bring out the more natural distortion that tube amps are so renowned for. Incidentally, we wrote about the best tube amps for home use here, and if you want some great practice amps, we wrote about them here too!

### On guitars with bound fingerboards, shrinking of the binding can produce a gap large enough to catch the treble E string when pulling it over the edge.
If only a few are present, I will fill the gaps to eliminate the problem. If the binding shrinkage has introduced gaps at every fret, the board should be re-radiused to eliminate all gaps and re-fretted.

PRS: One of the best guitar brands one can go for (if they don't want to go the custom-built route). Their guitars look beautiful and sound buttery smooth. They have the most beautiful-looking tops and inlays among non-custom guitars. The craftsmanship and attention to detail on PRS guitars is just exquisite. Of course they do have their custom shop, called Private Stock, and the Private Stock guitars are so gorgeous and meticulously built that anyone who sees them will be awestruck by their beauty, not to forget the sound of those guitars is like the voice of angels.

By 1947, with the release of 'Call It Stormy Monday', his biggest hit, Walker preferred playing with a smaller band lineup of six members. This size of band bridged the gap between the solo rural blues players like Robert Johnson or Charley Patton and the larger big-band ensembles of the '20s and '30s. It became popular and was adopted by bands that would find success over the next few decades.

I spoke with Matt "M@" Picone, of Fractal Audio, about the increasing use of modelers for today's biggest acts. Their flagship modeler, the Axe-FX II XL+, is used by bands as diverse as U2, King Crimson, and Taylor Swift. Increasing numbers of top-level guitarists are discovering Fractal's dozens of effects/amps/cab/microphone models and the obsessive tweakability inherent in their designs. In the credits of Fractal's products, Matt Picone is listed alongside Cliff Chase, the company's founder, president and DSP/hardware engineer, as contributing to "everything else." He says that title suits him because it spans a range of duties including support, artist relations, brand development, sales, marketing, PR, sound design, docs & manuals, e-commerce, business development, infrastructure and much more.
Their products are not just for ultra rock stars, as Matt explains:

• Now let's add some slap-back room delay. In the seventh insert (which, incidentally, comes post-fader in Cubase, as does insert eight), go to Delay/StereoDelay. In the left channel, try setting Delay to 1/16T, Feedback to 6.5, Lo to 50, Hi to 15000, Pan to -100, and Mix to 20, and enable Sync, Lo Filter, and Hi Filter. Use the same values for the right channel, but with Delay at 1/16, Feedback at 7.3, and Pan at 100.

## The Effect: Distortion is one of the most popular and desired guitar pedal effects, especially among rock, hard-rock and metal players: The Kinks, Jimi Hendrix, Metallica, to name a few. Prior to the introduction of effect pedals on the market, distortion was mostly achieved by forcing an overwhelming amount of electricity through a guitar amp's valves. Nowadays this is no longer necessary. Arguably one of the most famous and newbie-friendly options, and at the same time a prime example of a distortion pedal, is the classic Electro-Harmonix SOULFOOD.

Presuming it is at least theoretically possible to digitally document in computational language every nuance of Eddie Van Halen's performance, the other aspect on the rendering side of the best-most-real equation is the guitar - from the pick or fingers on the strings, to the resonance of the wood body, the dynamics of pickups, the amps, the effects and such other processing gear.

This was my first attempt at building a pedal. Now I'm hooked. It was such a joy putting it all together and quite a learning experience. I cannot emphasize enough the importance of reading and studying the instructions thoroughly. I would rate the included instructions a 10, a 5 STAR. Very clear and easy for a novice pedal builder to understand and walk through. Very well illustrated as well. Take your time, as you can easily overlook soldering connections. The main problem I encountered was a shorting problem.
The two soldering terminals along each side of the tube socket were located very close to the tube base socket and volume/gain pots. Follow the instructions by running a wire between the volume and gain pots, as well as the tube socket. Once I'd addressed this problem, it was clear sailing from there.

Electric guitars have to be plugged in for sound to be produced. A cable and an amplifier are a must for them to produce sound. They depend largely on electronic pickups, having between one and three pickups on their bodies, to produce this sound. They are relatively much lighter and have lighter-gauge strings when compared with their acoustic counterparts. They are therefore a better option for small-statured or small-handed players. Getting comfortable holding the guitar or fretting the notes is more physically challenging on an acoustic guitar than on an electric one.

Yes, and he lost the fingertips on his left hand and attached makeshift fingers made out of thimbles, but managed to play some of the greatest evil licks ever. I love Eddie but he screwed up Van Halen terribly by getting rid of Dave and turning it into a girl band. Duane was awesome and highly skilled and a sought-after studio musician. Clapton is the master.

These brands consist of guitars that are made of high-quality materials, including hardware, wood, etc., with interesting features. It is not the case that only expensive guitars are good for learners. Music is a wonderful pleasure that can make anyone happy from inside, and all this is possible through excellent instruments, including the guitar. Nothing can be more powerful in sad situations than music played on a guitar. The brands provided below are the most prominent guitar brands at economical prices. So, it is essential to select a guitar which is not only easy to learn on but also matches the style and requirements of your lifestyle.
Some beginners think they should choose a low-quality, less expensive brand of guitar, but that is a misunderstanding.

Here we have a very nice example of the Yamaha Red Label FG230-12. This example is in very good to excellent original condition. The woods used on this guitar are of a very high grade: spruce top, Honduran mahogany back, sides and neck (please see pics for the details, but very nicely grained woods). The workmanship is impeccable. The guitar plays well, with very good action, and the intonation is set dead on. The neck is solid mahogany and is slightly beefy. I love the feel of this guitar, and when you hear it you will be in 12-string heaven. No cracks or repairs. The condition is vintage used (it's about 40+ years old, you know), with several minimal scratches, but still overall a very beautiful vintage guitar. The wood has aged and mellowed with time to yield a wonderful rich tone only a decades-old quality instrument can offer. This one has that quality rich sound along with the playability, with the right aging now, and with its beauty it's a no-brainer. Also available is a cool $100 vintage hardshell case (see pics). Thanks for your interest!

Among the favorite brands of Gretsch lie the signature variants, the Brian Setzer and Chet Atkins models, while its Jet and Duo Jet are equally worthy. All these models are aimed explicitly at jazz. In fact, you can think of them for jazz as what Jackson is for metal. For intermediate and pro players looking for affordability, its Electromatic Series is the desired option.

What makes this one of the best electric guitar amps for beginners is Peavey's TransTube preamp technology, which provides a realistic tube-amp tone and response with the price and stability of a solid-state amp: the best of both amp styles. Loud enough to rock, yet the headphone jack allows you to rock in isolation without disturbing others.
The line in lets you plug in a CD player or mp3 player to jam with your favorite bands. It currently retails for $79.99.

The process of building our kit guitars and basses is straightforward and requires little experience in woodworking or in instrument building. The entire instrument can be assembled with a few simple tools. Setting the instrument up for your playing style is also straightforward. We will guide you through the basic process in our instruction manual. For more complex or particular setup requirements, we suggest that you work with a professional for setup - just as you would with any instrument that you purchase.

Some Craigslist and EBay sellers have been claiming the 500- and 600-series Kents are made by Teisco. I think we've shown that that's not the case. Some sellers also describe those early Kents as having "Ry Cooder" pickups. As most of you know, Ry Cooder is an incredibly talented multi-stringed-instrument musician. David Lindley, another great talent, gave him a pickup from an old Teisco guitar. The photo at left is exactly like it. Cooder put the pickup into one of his Stratocasters and liked the sound so much that he got another one and put it into another Strat. These pickups are also described as "gold foil" pickups. There are variations in the pattern of cut-outs on the chrome covers of different pickups. I don't know if the others sound any different, but if I were looking for a "Ry Cooder pickup", something like the one pictured here is what I would be looking for. The pickups have become worth more than the guitars they are on; consequently, as the guitars are bought up and trashed for their pickups, their prices are going to rise.
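As an aside on the tempo-synced delay settings quoted earlier (the StereoDelay values 1/16 and 1/16T): a synced note value maps to milliseconds with simple arithmetic, since one quarter note lasts 60000/BPM ms and a triplet note is two-thirds of its straight counterpart. A minimal sketch; the 120 BPM tempo is an illustrative assumption, not a value from the text:

```python
# Convert a synced note value to a delay time in milliseconds.
# The tempo (bpm) is an assumed example value, not from any project here.

def delay_ms(bpm: float, division: int, triplet: bool = False) -> float:
    """Delay time in ms for one 1/division note at the given tempo."""
    quarter = 60_000 / bpm            # one quarter note in ms
    note = quarter * 4 / division     # e.g. division=16 -> a 16th note
    return note * 2 / 3 if triplet else note

bpm = 120  # assumed tempo
print(delay_ms(bpm, 16))                  # 125.0 ms (straight 1/16)
print(round(delay_ms(bpm, 16, True), 2))  # 83.33 ms (triplet 1/16T)
```

This is why the left and right channels in the settings above repeat at slightly different intervals: at any given tempo, the 1/16T side is always two-thirds the delay time of the 1/16 side.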
Typical modern Telecasters (such as the American Standard version) incorporate several details different from the classic form. They typically feature 22 frets (rather than 21) and truss rod adjustment is made at the headstock end, rather than the body end, which had required removal of the neck on the original (the Custom Shop Bajo Sexto Baritone Tele was the only Telecaster featuring a two-octave 24-fret neck). The 3-saddle bridge of the original has been replaced with a 6-saddle version, allowing independent length and height adjustment for each string. The long saddle bridge screws allow a wide range of saddle bridge positions for intonation tuning. The stamped metal bridge plate has been replaced with a plain, flat plate, and the bridge grounding cover (which, while helping with the shielding, impedes players who like to mute strings at the bridge with the side of the palm, and makes it impossible to pick near the saddles to produce the characteristic Telecaster 'twang') has been discontinued for most models.
Also different from the original is the wiring: The 3-way toggle switch selects neck pickup only in the first position, neck and bridge pickups together in the second position, and bridge pickup only in the third position. The first knob adjusts the master volume; the second is a master tone control affecting all the pickups.

When jazz guitar players improvise, they use the scales, modes, and arpeggios associated with the chords in a tune's chord progression. The approach to improvising has changed since the earliest eras of jazz guitar. During the Swing era, many soloists improvised "by ear" by embellishing the melody with ornaments and passing notes. However, during the bebop era, the rapid tempos and complicated chord progressions made it increasingly difficult to play "by ear." Along with other improvisers, such as sax and piano players, bebop-era jazz guitarists began to improvise over the chord changes using scales (whole tone scale, chromatic scale, etc.) and arpeggios.[2] Jazz guitar players tend to improvise around chord/scale relationships, rather than reworking the melody, possibly due to their familiarity with chords resulting from their comping role. A source of melodic ideas for improvisation is transcribing improvised solos from recordings. This provides jazz guitarists with a source of "licks", melodic phrases and ideas they incorporate either intact or in variations, and is an established way of learning from the previous generations of players.

By 1939, the Supro line had grown again. The '38 line was essentially intact, with the addition of a number of new resonator acoustics. New was the No. 23 Supro Arcadia Guitar, a sunburst birch-bodied resonator made by Harmony. This had a simple nickel coverplate with two concentric circles of round holes, and a slightly rounded head with an oval Supro metal logo plate. The fingerboard had four dot inlays, the body two f-holes. Cost was $22.50.
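The fret counts and intonation adjustments that come up in these descriptions all trace back to one piece of arithmetic: in equal temperament, fret n sits where the remaining string length equals the scale length divided by 2^(n/12). A minimal sketch of that rule; the 25.75" scale is just an example value taken from a spec quoted earlier:

```python
# Equal-temperament fret spacing: each fret halves the remaining string
# length over 12 frets, so fret n lies at L * (1 - 2**(-n/12)) from the nut.

def fret_position(scale_length: float, n: int) -> float:
    """Distance from the nut to fret n, in the same units as scale_length."""
    return scale_length * (1 - 2 ** (-n / 12))

scale = 25.75  # inches (example scale length)
print(round(fret_position(scale, 12), 3))  # 12.875 -- exactly half the scale
print(round(fret_position(scale, 1), 3))   # 1.445  -- the first fret
```

The 12th-fret check is a handy sanity test: on any correctly fretted instrument, the 12th fret falls at exactly half the scale length, which is also why the saddle position matters so much for intonation.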
As a guitarist with a complete understanding of the vintage instruments he worked on, Novak wasn't completely comfortable with what any one instrument was capable of delivering. He wanted to combine all the features of his old favorites while adding design twists that would give him everything he was looking for in an electric guitar. This led to the invention of his patented fanned-fret fingerboard, which gives an instrument combined scale lengths.

Why is Mesa Boogie so low?! Have Mesa Boogie ever made a bad amp? Look how many guys endorse their gear. Have you ever tried a Dual Rectifier or Mark V? It will tear you to shreds. They are AMAZING amps. Best part, they're all tube. Line 6, why the hell are they fifth? Why are they even in the top 15? They are nothing but crap digital rubbish. Play a real amp like a Mesa Boogie. Line 6, pft. Mesa Boogie is the best amp brand by far.

There was a question from Benhur about Cort. If you lived in England you may know them better. They are a South Korean company who builds many of the lower-price-point guitars for big names like G&L and Fender, just to name a few. Lower price doesn't always mean less quality. Cort has a following under its own name, and many playing instruments carrying other well-known names may not know they are playing a Cort.

Effects pedals are electronic devices that modify the tone, pitch, or sound of an electric guitar. Effects can be housed in effects pedals, guitar amplifiers, guitar amplifier simulation software, and rackmount preamplifiers or processors. Electronic effects and signal processing form an important part of the electric guitar tone used in many genres, such as rock, pop, blues, and metal. All these are inserted into the signal path between an electric instrument and the amplifier. They modify the signal coming from the instrument, adding "effects" that change the way it sounds in order to add interest, create more impact or create aural soundscapes.
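The signal-path idea above (devices inserted between instrument and amplifier, each modifying the signal in turn) can be sketched as plain function composition. The tanh soft clip is a common textbook stand-in for overdrive-style clipping; the gain and level numbers are illustrative and not modeled on any real pedal:

```python
import math

# Toy signal chain: each "pedal" is a function mapping sample -> sample,
# and the chain applies them in order, instrument side first.

def overdrive(gain: float):
    # tanh soft clipping: a standard textbook sketch of overdrive
    return lambda x: math.tanh(gain * x)

def volume(level: float):
    return lambda x: level * x

def chain(*pedals):
    def run(sample: float) -> float:
        for pedal in pedals:   # signal passes through each pedal in order
            sample = pedal(sample)
        return sample
    return run

rig = chain(overdrive(5.0), volume(0.5))
print(round(rig(0.2), 3))  # quiet input, driven then trimmed: about 0.381
```

Reordering the functions changes the result, which is the whole point of the long-running debates about pedal order on real pedalboards.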
During the Advanced Electronics class students will build a simple low impedance booster by hand, from paper to breadboard, to a point-to-point wired circuit board. The Booster can be put into a guitar or other type of enclosure. In addition, Scott will familiarize students with his 'harness wiring' tool, that is available online by visiting Guitar Modder.

Legend has it that Funkadelic's "Maggot Brain," the 10-minute solo that turned the late Eddie Hazel into an instant guitar icon, was born when George Clinton told him to imagine hearing his mother just died – and then learning that she was, in fact, alive. Hazel, who died of liver failure in 1992 at age 42, brought a thrilling mix of lysergic vision and groove power to all of his work, inspiring followers like J Mascis, Mike McCready and Lenny Kravitz. "That solo – Lord have mercy!" says Kravitz of "Maggot Brain." "He was absolutely stunning."

We began the process by creating a 'short-list' of brands that have amps selling in the sub $1000 price range with amps that have strong enough ratings to be short-listed for any of our other electric guitar amp guides. This gave us the following 22 brands to consider: Blackstar, Boss, Bugera, California Tone Research, DV Mark, Egnater, EVH, Fender, Hughes & Kettner, Ibanez, Laney, Line 6, Marshall, Orange, Peavey, PRS, Randall, Roland, VHT, Vox, Yamaha and ZT.

In 1968, Jimi Hendrix talked about his love for a Houston blues luminary who wasn't known outside the region: "There's one cat I'm still trying to get across to people. He is really good, one of the best guitarists in the world." Albert Collins, who died of lung cancer in 1993, played with his thumb and forefinger instead of a pick to put a muscular snap into his piercing, trebly solos. His fluid, inventive playing influenced Hendrix, sometimes overtly: Jimi liked Collins' sustain in the song "Collins Shuffle" so much that he used it on "Voodoo Chile."
Buy a Kay online at our main web site, call our store or visit our Chicago guitar shop in person and check out the new Kay Vintage Reissues. We're not one of those guitar super stores; you will find we're friendly, knowledgeable and easy to work with. We sell guitars worldwide and we want to earn your business, so please don't hesitate to contact us for our best Kay prices. We ship Kay basses and guitars to Canada, Australia, the UK, Europe, Japan and other locations.

I am now building several models which I offer as my signature work. I've always had a special affinity for archtop guitars, but as you'll see in this website, I will go wherever the creative impulse takes me. The instruments I am building now are a distillation of the best design ideas I've found in classic instruments, re-imagined and evolved into higher form and function, as fine tools for discerning artists.

Boutique-quality instruments, available at prices that the masses can afford. Andrew strives to innovate with what he does with his instruments, from his unique bracing techniques to the selection of soundboard woods, seeking the best wood grains for sound production. From his top-of-the-line custom-made creations all the way down to his Production series of guitars, you get so much bang for the buck.

Description: Body: Maple - Body Construction: Semi-Hollow (Chambered) - Neck Wood: Maple - Fingerboard: Rosewood - Frets: 20 - Inlay: Dot - # of Strings: 6 - Scale Length: 25" (64cm) - Headstock: 3+3 - Bridge: Adjustable - Bridge Construction: Rosewood - Cutaway: Double - Hardware: Chrome, 2x Volume Control, 2x Tone Control, 3-Way Switch, Kluson Tuners - Pickups: Harmony - String Instrument Finish: Goldburst, Redburst

Hertz Guitar is a well-known brand which manufactures high-quality guitars. The company originated in Shanghai, China, and North Korea. Their musical instruments were introduced in September 2009.
They offer world-class quality instruments from world-class branded production houses, maintaining international standards. The focus is mainly on musical instruments as well as accessories, and they manufacture a wide range of guitars. Available at below Rs. 12,040/- (approx).

### Fortunately I did some research, performed some trial and error experimentation on my own semi-hollow (a very nice Epiphone Dot) and found what I consider to be the best way to wire up a hollow body guitar. You won't need any uncommon tools or equipment – just a wrench set (or an adjustable wrench), plenty of wire, a pair of needle-nose pliers, a soldering iron, and a bit of patience. I've included plenty of pics to help illustrate each step.

Jamplay is a large YouTube channel featuring all levels of guitar lessons, from the very basic beginners' guides to expert levels and, of course, some videos that dissect popular songs or styles down to the last finger and fret. It has a big range of different players and "teachers", so if you find one guy a bit hard to understand, or perhaps you don't quite connect with his style, look around and you'll soon find someone else.

• Why size matters: Fret width and height affect playability considerably. Fret wire measures .078 to .110 at the crown, or top, and runs between .035 and .055 high.
Taller frets, at .045 and up, tend to make for easier string bending and produce clear notes without much pressure, which makes them ideal for high-speed playing. The furthest point of that concept is the scalloped fretboard, employed most notably by Yngwie Malmsteen and John McLaughlin, who played a specially designed Gibson J-200 with scalloped frets and drone strings with the group Shakti.

Description: 1997 Non Left Handed Model. Body: Laminated Maple - Body Construction: Semi-Hollow (Chambered) - Top Wood: Maple - Neck Attachment: Set - Neck Wood: Maple - Neck Construction: 3 Piece - Fingerboard: Rosewood - Inlay: Block - # of Strings: 6 - Scale Length: 24.75" (63cm) - Headstock: 3+3 - Bridge Construction: Rosewood - Cutaway: Double - Hardware: Gold, 2x Volume Control, 2x Tone Control, 3-Way Switch - Pickups: Humbucker - String Instrument Finish: Ebony, Natural, Vintage Sunburst

It helps if you shop frequently, but at my Guitar Center the tech is frequently going through guitars on the wall and setting them up so each is ready to be sold without the need for a setup. They have motivation to keep their guitars set up. I mean, have you ever gone to a shop, picked up a guitar you wanted, and it had stupid high action? You're not gonna buy it until it's set up, right? If they're set up, they'll play better and they'll be a lot easier to sell.

Mid-1939: popsicle bracing on D body sizes. See the above picture for what the popsicle (or T-6, or upper transverse graft) brace is. The popsicle brace was added to the underside of the top of the guitar, below the fingerboard, to help prevent top cracks alongside the fingerboard. Since the first D body size was made in about 1934, problems obviously came about, and Martin added the brace by 1939. The brace does not appear in pre-1939 Martin D sizes, but transitioned in around 1939, and is present in all 1940 and later D models.
Without the popsicle brace, the top is attached only by the strength of the spruce fibers and a 1/2" x 2" glue area where the top overlays the soundhole #1 brace. With the popsicle brace there is an additional 1" x 2" glue surface directly under the fingerboard. Unfortunately the popsicle brace can deaden the sound of the upper bout area of the soundboard, and it doesn't always prevent the top from cracking along the fingerboard either. As people search for why the old Martins sound so good, they examine every aspect of them, and the popsicle brace usually enters the conversation. Here's some data on popsicle braces:

The theory of evolution says that the longer something has been evolving the more complex it tends to get, and this is certainly true of the electric guitar, which has been evolving for over half a century. Electric guitar sounds rely on the instrument itself, the amplifier through which it is played and also on the loudspeaker system used. Further variables are introduced when miking techniques are taken into consideration, though these days miking is only one of the ways of recording an electric guitar — we also have a number of effective DI techniques from which to choose.

This is basically the same as having an entire studio's worth of gear under your feet. You have 72 amp models to play with, painstakingly recreated from reference amps such as Vox AC30 amps, the Hiwatt Custom 100, Fender amps and more. There are 194 effects to choose from, ranging from distortion to modulation to delay, compression, wah - basically any effect you can think of!
There are also 37 cabinets to choose from, which give each amp model and effect a unique sound, as well as 16 microphones that lend unique tonal qualities to your overall sound; we challenge you to get bored of this!

I have a 12-string Lyle in really good shape, bought as a birthday present for me back in the late 1970s from a pawn shop. I need to take it to a luthier to have it set up better, but it sounds really good. Right now the action is a little high, but there's not much wear. For a long time I put it away, finding it hard to chord as a relative newbie back then. But now I am more than ready. It does say it was made in Korea, which makes it seem a little more rare. The sound is still great, or maybe even better! Wish I knew what it was worth, but I hope the guy I find to set it up will know more.

When two pickups are wired in series, a good portion of the treble frequencies is lost because the long pickup wire works like a resistor, and any resistor in the signal path will suppress the signal. The formula works like this: the longer the wire, the higher the resistance, and the more treble is lost. We all know this from guitar cables: when you use a very long guitar cable, the sound isn't as detailed and transparent as it is with a shorter cable, because a long cable acts as a resistor.

We're just over the £300 price tag here, but for the shredders and jazz fans out there, the Schecter C-6 Plus in Electric Magenta is a great option, whether you're a beginner or a pro. It's one of our favourite cheap electric guitars, and one hell of a performer, featuring professional-level appointments inspired by Schecter's Diamond Series guitars: this is a monster of tone, and well worth the extra investment.

These pickups rely on electromagnetic induction to "pick up" the vibration of the strings. Basically, the pickup emits a magnetic field, and as the string vibrates through it, an electrical current is generated, which is your audio signal.
This information is then sent on to an amplifier. The reason you need an amplifier is that the original signal from the guitar is not strong enough to drive a loudspeaker without a boost from the amp.

Their LP models have a "mahogany" body, and the binding is very thin, so I think the maple top is just veneer. I also noticed the headstocks were not the usual LP angle. In my opinion, Epiphone wins hands down. The Strat types don't look much better in my opinion: again, sharp fret ends and an awful-looking headstock and logo. If you see any of their relic jobs, you'll notice that they range from passable at a distance to hideous from any range.

The main purpose of the bridge on a classical guitar is to transfer the vibration from the strings to the soundboard, which vibrates the air inside the guitar, thereby amplifying the sound produced by the strings. The bridge holds the strings in place on the body. Also, the position of the saddle, usually a strip of bone or plastic that supports the strings off the bridge, determines the distance to the nut (at the top of the fingerboard).

"This axe is no slave to the past, however, starting with Leo's PTB™ (Passive Treble and Bass) system, which functions on all three pickups for dramatically more variety than the vintage setup. What's more, the Legacy features a Leo Fender-designed Dual-Fulcrum vibrato, a work of engineering art which allows bending up or down with unsurpassed stability, while offering a silky feel through its beefy aluminum vibrato arm. The Legacy's hard-rock maple neck features an easy-playing satin finish, a comfortable 9 1/2″ radius and Jescar 57110 medium-jumbo nickel-silver frets for silky playability. The moment you open the luxurious hardshell case, you'll be greeted with a stunning instrument and a delicious aroma that'll have your pulse racing."
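The cable-length treble loss described earlier can be sketched with a simple first-order RC low-pass model: the pickup's source impedance and the cable's capacitance to ground form a filter with corner frequency f = 1/(2πRC). The component values below are illustrative assumptions, not measurements:

```python
import math

def cutoff_hz(source_impedance_ohms: float, cable_capacitance_f: float) -> float:
    """Corner frequency of the RC low-pass formed by the source impedance
    and the cable's capacitance to ground: f = 1 / (2 * pi * R * C)."""
    return 1.0 / (2 * math.pi * source_impedance_ohms * cable_capacitance_f)

# Assumed figures: a passive pickup can present tens of kilohms at treble
# frequencies; ~100 pF per metre is a common cable capacitance figure.
SOURCE_IMPEDANCE = 50_000   # ohms (assumed)
CAP_PER_METRE = 100e-12     # farads per metre (assumed)

for length_m in (3, 6, 12):
    fc = cutoff_hz(SOURCE_IMPEDANCE, CAP_PER_METRE * length_m)
    print(f"{length_m:>2} m cable -> treble roll-off above ~{fc / 1000:.1f} kHz")
```

Doubling the cable length doubles the capacitance and halves the corner frequency, which is why a long cable sounds duller than a short one.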
Chorus: Though it can be overused, light distortion works well as a filler for choruses in Christian worship and most other genres.
Verse: You won't typically hear a distorted verse, though at times a two-guitar group can make this work. Generally, you'll want to save distortion for the higher-intensity portions of a song.
Bridge: A lot of Christian songs tend to lower the intensity during the bridge, which makes light distortion a little less usable. For bridges that keep the tempo up, though, it can work pretty well.

I'm getting a bad hum that almost goes away when I turn the volume up completely... it gets loud as I turn it down. Someone rewired the guitar with two-pair wire, and they attached a ground to the volume and tone pots everywhere the wires went, and also to the body of the switch. I think it's a bad ground loop problem, so I'm going to change everything to single-strand wire. I'm guessing there's a voltage difference somewhere and it gets close to normal when I turn the volume pot all the way up.

Inspirational, motivational and light background tune with a beautiful, atmospheric melody. Good production audio for slideshows, presentations, YouTube, advertising, business projects, motivational and success videos, inspiring moments, bright achievements and film scores. I used electric guitar, muted guitar, piano, staccato strings, bass, drums, glockenspiel and bright pads.

After years of analogue delay, manufacturers decided that it was not clean or accurate enough, so they came up with a much sturdier design built around digital delay chips. Not only can these get the timing down perfectly every time, they can also cover a wider range of delay options. Depending on the chip inside, you can easily get multiple seconds of delay time in a single pedal. The main downside is that they can sound a bit clinical and too clean. Manufacturers have battled this by adding different modulation options to delays like this to give them more character.
If you want a delay for every possible scenario, digital might just be the way to go.

"I have purchased 15 personalized guitars from the top guitar custom shop. All the guitars have met or exceeded my expectations. Great workmanship and quality work. An exceptional group of people to work with. They are ready to answer your questions or concerns. The one time I had a concern about a guitar, they responded immediately and handled the situation better than I expected. I highly recommend this company!" Dr E C Fulcher Jr - Abingdon, Maryland, USA.

Most of the guitars, banjos and mandolins my customers use and collect have been made by major manufacturers such as Martin, Gibson and Fender, or by a few superb handcrafters such as D'Angelico and Stromberg, but over the years, by far the greatest number of instruments purchased in the USA and worldwide have been lower-priced student models. Prior to 1970, most student-grade instruments sold in the USA were made here by companies such as Kay, Harmony and Regal in Chicago, or Oscar Schmidt of Jersey City, New Jersey, and Danelectro of Neptune, New Jersey. When I started out playing and collecting guitars in the mid 1960s, brands such as Harmony, Kay, Stella, Silvertone and Danelectro were the standards for student use. We saw very few overseas imports.

It's probably fair to say that drive pedals of all shapes and sizes outnumber the other types of effects, because they form the backbone of your overall tone. It's also probably fair to say that drive is one of the most subjective tonal changes you can implement. One man's muff is another man's screamer, so to speak. There are certain classics within the genre which may act as a gateway to stronger forms of grit, though. Ibanez's famous green Tube Screamer pedal is used by countless players on account of its versatility: it can form the basis of a good-quality blues tone, or it can complement a distortion pedal by "boosting" or tightening up the signal.
Another favourite is the Electro-Harmonix Big Muff, which has been used for decades by players looking to add a distinct fuzziness to their tone.

While larger frets do seem to result in a rounder tone, perhaps with increased sustain too, they also yield a somewhat less precise note than narrower frets, at least as examined "under the microscope." Unless it is very precisely shaped, and frequently dressed, the broad crown of a jumbo fret can "blur" your note ever so slightly, which might even be part of the sonic appeal for some players: the way, for example, a tweed Deluxe is a little blurrier or hairier at most volume settings than a blackface Deluxe. Be aware, however, that the phenomenon can work against some sonic goals too.

Typically, players place their delay and reverb effects within the effects loops of their amplifiers. This placement is especially helpful if you get your overdrive and distortion from your amplifier instead of from pedals; otherwise you would be feeding your delay repeats and reverb ambience into the amplifier's overdrive and distortion, which can sound muddy and washed out. You can also place your modulation pedals within the effects loop for a different sound.

The Mooer Red Truck is one of the most full-featured effects strips on the market. Featuring several effects modules within one unit, it is designed for players who prefer the simplicity of single effects over multi-effects and want a portable solution for rehearsals, gigs, or situations where carrying a lot of gear is an issue.

Guild is the most underrated American premium guitar brand. Almost as good as a Martin and way better than most Gibsons, Guilds are typified by clear, crisp, even tone. While lacking the full bass and tinkly top end of a Martin, the evenness of tone is a selling point for many artists, along with the clarity.
The maple models are especially bright and brassy in tone, making Guild a popular brand among rock stars in the 70s, their heyday, when some of the finest American guitars came out of their Westerly, Rhode Island plant. Top-end Guild acoustics are graced with an ebony fretboard more typically found on jazz models, slightly curved and beautifully inlaid with abalone fret markers. The Guild jumbo 12-string has been an especially prized instrument, and was for many years considered the best mass-produced American 12-string available.

Guitar amp modeling devices and software can reproduce various guitar-specific distortion qualities that are associated with a range of popular "stomp box" pedals and amplifiers. Amp modeling devices typically use digital signal processing to recreate the sound of plugging into analogue pedals and overdriven valve amplifiers. The most sophisticated devices allow the user to customize the simulated results of different preamp, power-tube, speaker-distortion, speaker-cabinet, and microphone-placement combinations. For example, a guitarist using a small amp modeling pedal could simulate the sound of plugging their electric guitar into a heavy vintage valve amplifier and a stack of 8×10" speaker cabinets.

Description: Body: Basswood (Tilia, Linden, Lime) - Body Construction: Solid - Neck Wood: Maple & Walnut - Fingerboard: Maple - Frets: Jumbo - Inlay: Black Dot - # of Strings: 6 - Scale Length: 25.5" (65cm) - Headstock: 6 In-Line, Reverse - Bridge: Lo-Pro Edge - Hardware: 1x Volume Control, 1x Tone Control, 5-Way Switch - Pickups: DiMarzio - Pickup Configuration: H-S-H - Finish: White

Ibanez is a Japanese guitar brand established in 1957. They provide acoustic, bass and semi-acoustic guitars in different price segments. The company is owned by Hoshino Gakki, with headquarters in Nagoya, Aichi, Japan. They also manufacture amplifiers, mandolins and effects units.
They have become one of the top ten guitar brands in India. The price range starts from Rs. 13,299/- onwards (approx.). For further details, visit Ibanez.com.

The playing of (3-5 string) guitar chords is simplified by the class of alternative tunings called regular tunings, in which the musical intervals are the same for each pair of consecutive strings. Regular tunings include major-thirds, all-fourths, and all-fifths tunings. For each regular tuning, chord patterns may be diagonally shifted down the fretboard, a property that simplifies beginners' learning of chords and simplifies advanced players' improvisation. On the other hand, in regular tunings 6-string chords (in the keys of C, G, and D) are more difficult to play.

There aren't that many entry-level to mid-priced electric guitars that can meet the demands of heavy use and the standards of professional musicians, which makes the PRS SE Standard 24 pretty special. Its price tag is friendly enough for beginners and intermediate players, yet it's packed with features that make it a favorite among pro-level guitarists.

Budget acoustics usually have a very high action (which a good luthier may be able to fix!). Barre chords on acoustic guitar are demanding and require good finger strength even on a well-set-up guitar; on a budget instrument with a high action they will be next to impossible! Cheaper acoustic guitars can also be very hard to play higher up the fretboard because the strings are too far from the fretboard. If you find this, the truss rod (a rod inside the neck that controls how "level" the neck is) can be adjusted by someone who knows what they're doing. If you can stretch to a mid-priced acoustic, you should be able to get something suitable for a beginner.
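The diagonal-shift property of regular tunings mentioned above is easy to verify numerically: in a regular tuning, moving a chord shape across the strings transposes every note by the same interval, while in standard tuning the major third between the G and B strings breaks the pattern. The shape below is arbitrary, chosen only for illustration:

```python
# Open-string pitches in semitones above the lowest string.
ALL_FOURTHS = [0, 5, 10, 15, 20, 25]   # regular tuning: every interval is 5
STANDARD    = [0, 5, 10, 15, 19, 24]   # E A D G B E: one interval is 4

def sounding(shape, tuning):
    """shape is a list of (string_index, fret) pairs; returns sounding pitches."""
    return [tuning[s] + fret for s, fret in shape]

def shift(shape, n=1):
    """Move the whole shape n strings toward the treble side."""
    return [(s + n, f) for s, f in shape]

shape = [(2, 3), (3, 2), (4, 0)]       # an arbitrary three-note shape

for name, tuning in (("all-fourths", ALL_FOURTHS), ("standard", STANDARD)):
    before, after = sounding(shape, tuning), sounding(shift(shape), tuning)
    print(name, [b - a for a, b in zip(before, after)])
```

In all-fourths tuning every note moves by the same 5 semitones, so the shifted shape still spells the same chord type; in standard tuning the intervals come out uneven, which is why shapes must change as they cross the B string.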
The company initially manufactured only traditional folk instruments, but eventually grew to make a wide variety of stringed instruments, including violins, cellos, banjos, upright basses, and many different types of guitars, including classical guitars, lap steel guitars, semi-acoustic guitars, and solid-body electrics. Some of Kay's lower-grade instruments were marketed under the Knox and Kent brand names.

I doubt I can bring anything relevant to this discussion that hasn't been said already, but since I liked the article so much and the subject has puzzled me since I got my first guitar, I just have to pitch in. My first guitar was a cheap Jackson-esque strat; the brand was Cyclone. It was significantly lighter than my friend's Fender Stratocaster, and I liked it for that reason from the beginning. It was just much easier and more comfortable to play, especially while standing. Maybe because of this I've been biased to doubt the whole tonewood thing. My experience is that most "guitar people" (at least here in Finland) seem to think that lighter wood is simply a sign of a bad-quality electric guitar. I talked about this quite recently with a local luthier, who is very science-oriented and uses rosewood for the body. The guitars he makes are so light that when you first pick them up, it is hard to believe they aren't hollow. So I asked him about his thoughts on the density and/or other qualities of the wood affecting the tone, and his response was pretty much consistent with the article. He did mention the _theoretical_ possibility of waves traveling into the wood and reflecting back to the strings, _possibly_ affecting the sustain. As someone stated, in real-life physics there are never completely isolated phenomena, but you can draw a line as to whether a factor is significant or not.
John's comment above would support the denser wood being better, but my guess is that when it comes to the sound that is audible to the human ear, the material does not count. How a guitar feels is a totally different matter, and it shapes the way the player hears the sound drastically. My intuition says that lighter wood might convey the vibration to the player's body, which would partly explain Butch's experience with guitars of different materials. I've never thought about that before, but I do find anything other than the strings resonating (springs, screws...) uncomfortable.

Although the G&L Legacy electric guitar was released one year after the passing of Leo Fender, it is designed to the specifications of the original Stratocaster but with a few modern features specific to G&L instruments. The Legacy included, for instance, G&L's Dual-Fulcrum vibrato and Schaller tuners, and was available in a combination of different tonewoods. Even if the G&L guitars from before Leo Fender's death are more collectable, the Legacy is still considered a high-quality instrument.

In '71, Univox introduced what are arguably their coolest-looking amplifiers, the B Group, covered in nifty two-tone blue vinyl. Remember, this was the tail end of the heyday of Kustom, with its colored tuck-and-roll amps, and the two-tone blue with a red-and-white oval logo was boss. The lettering was the same uppercase blocks as on the outline logo. These new Univox amps were hybrids, with solid-state power supplies and lots of tubes (lots!). The Univox B Group had two combo and two piggyback guitar amps, two piggyback bass amps and a piggyback PA. It is not known how these were constructed, but because previous amps had Japanese chassis put into Westbury-made cabinets, these were probably built that way also.

Understandably, the Blackstar ID:Core Stereo 20 V2's main selling point is its versatility, and this is reflected in the reviews.
Sound quality also got a lot of thumbs up, with many describing the amp as full-sounding thanks to its stereo speaker configuration. For something this versatile, the amp's ease of use also gets commended quite often, with some finding it easy to dial in different sounds. Finally, a good number of users find the amp's overall build quality solid and reliable.

We've already shown how you will sometimes want more than one mic on your amp to achieve the ideal sound in your tracks. Many semi-distant and ambient techniques will be most useful alongside a close mic, but on a separate track, to retain the option of blending in a more direct tone to create your overall sonic picture. Any single-mic positions discussed thus far can be combined into multi-mic sounds in the mix when recorded to different tracks. There are also several other approaches to multi-miking that might come in handy now and then, and which are worth some exploration.

The sound blew away guitarists when units first popped up in guitar stores. If the dizzying harmonic swirl didn't just make you puke, it really sent you tripping. Interestingly, many players tired of it a lot quicker than they did the phaser's subtler, less imposing "swoosh", and consequently it's difficult today to name a fraction as many great guitar tracks with flangers slapped all over them as with phasers. For the latter, we've got the Stones' "Shattered" (or just about anything from Some Girls), the Clash's "Lost in the Supermarket" from London Calling and loads from Sandinista, and heavier rockers from early Van Halen to recent Foo Fighters. In the flanging corner, we've got the Pat Travers Band's "Boom Boom, Out Go the Lights" and... well, I'm sure there's another somewhere. Okay, maybe the intro lick to Heart's "Barracuda" redeems it some.

There are many variations on the solid-body guitar theme. Companies such as Ibanez, Yamaha, PRS, Jackson and many others make solid-body guitars.
Generally you get what you pay for, but provided you avoid the very cheapest models and stick with reputable brands (such as those mentioned above), you can spend a relatively small amount of money and get a guitar that will last you long into your playing career.

Over the years, many guitarists have made the Telecaster their signature instrument. In the early days, country session musicians were drawn to this instrument designed for the "working musician". These included the King of the Tele Roy Buchanan, Buck Owens, Guthrie Thomas, Waylon Jennings, and James Burton, who played with Ricky Nelson, Elvis Presley, and Merle Haggard (a Signature Telecaster model player himself). Burton's favorite guitar was his Pink Paisley (or Paisley Red[5]) Telecaster. Later, Danny Gatton blended diverse musical styles (including blues, rockabilly and bebop) and became known as the "Telemaster". Eric Clapton used a Telecaster during his stint with the Yardbirds, and also played a custom Telecaster fitted with Brownie's neck while with Blind Faith. Roy Buchanan and Albert Collins proved the Telecaster equally suited to playing the blues. Muddy Waters also consistently used the Telecaster, and Mike Bloomfield used the guitar on his earlier works. Soul sessionist Steve Cropper used a Tele with Booker T. and the M.G.'s, Sam and Dave, Otis Redding and countless other soul and blues acts.

The separation between Briefel and Unicord must not have been entirely unamicable, probably more a matter of direction than anything else. In any case, in 1978, following the demise of the Univox brand (when the Westbury brand debuted), three Westbury Baroque acoustics were offered, all made by Giannini. These included one "folk" dreadnought with a tapered Westbury head, the stylized "W" Westbury logo, block inlays and a very Martin-esque pickguard. The "classic" was our old friend, the CraViola, with a new head shape. The 12-string was another CraViola.
These probably lasted only a year or so; in any case, the Westbury name was dead by 1981.

In the vintage setup, the pickup is wired to the pot lug alone, with the tone control capacitor attached to the output side. This tends to allow the volume to be rolled off without losing too much high end, which is great for those who play clean rhythm by simply lowering their guitar volume rather than switching amp channels or turning off a boost pedal. It's old-school, and it works. The downside is that the tone control sometimes has to be rotated a bit further before its effect is heard.

The focus has always been to start with sound and top it off with a bold, boutique-inspired appearance. When Michael Kelly launched, we in fact offered only mandolins and acoustic basses. These two markets had been underserved, and consumers could not buy a great-sounding instrument without breaking the bank. The Michael Kelly Dragonfly collection of acoustic basses and mandolins quickly became popular and hard to get. Musicians were drawn to their decidedly custom appearance and then fell in love with their sound and performance.

I borrowed the above quote from an article on effects pedals by Robert Keeley (a maker of seriously fine effects pedals), which can help you remember the order in which to place your pedals. I have a few slight modifications and additions that I use, but this is a great way to remember the rough order quickly, and it comes from one of the great pedal masters.

However, even for recording experts who can discern whether something was done at Columbia Records Studio A or Olympic or wherever, it's challenging to define the percentage of influence that the studio provides. "I don't know that you can measure it in any way. It's really more an ineffable quality of sound and aesthetics," Horning Schmidt says.
"You can measure frequency response and you can measure decibels, but in my research I've found that back in the thirties and forties, you had engineers saying 'you can't just go by the meters. You have to use your ears.'"

Mr. White is an incredibly underrated guitarist. His singles (from the White Stripes) always span just three to four chords, and his simplistic blues rhythm and picking styles leave him overlooked most of the time. However, his masterful use of the DigiTech Whammy and his erratic playing make for some of the most memorable guitar solos ever. Check out "Ball and Biscuit" and try not to like that solo. One of my favorite Jack White moments was during the 2004 Grammys, where he took "Seven Nation Army" and went into a cover of Son House's "Death Letter" (another artist whom I had to unwillingly cut from the list). In an awards show celebrating Justin Timberlake and Missy Elliott, Jack White took time to salute where things got started, an artist born a century ago.

Also apparently still in the line in early '64 was the SD-4L, which had adopted four of the two-tone, metal-covered pickups found on the SS-4L guitar. This still had the old, elongated Strat head, along with the platform vibrato system found on the previous SS guitars. The SD-4L probably didn't make it into '65, but the shape was taken over by the more conventional TG-64.

Looking at my beautiful but dusty Les Paul sitting in the corner, I walked over to my bookshelf to choose a book to once again work on my electric. Now, I will say that I am NOT shy about purchasing a book (or many, many books) if I want to learn something, so there was quite a selection to choose from. I had a few books that focused on the electric guitar, but for the most part they were either incomprehensible or started you off with basic chords and strumming, then you'd turn the page and WHAM! it was Eddie Van Halen time. Just no real steady build-up in skills, and a lot of confusing jargon.
Which is probably why I set the electric aside.

The acoustic guitar lends itself to a variety of tasks and roles. Its portability and ease of use make it the ideal songwriter's tool. Its gentle harp-like arpeggios and rhythmic chordal strumming have always found favor in an ensemble. The acoustic guitar has a personal and intimate quality that is suited to small halls, churches and private spaces. For larger venues some form of amplification is required. An acoustic guitar can be amplified by placing a microphone in front of the sound hole or by installing a pickup. There are many entry-level acoustic guitar models that are manufactured to a high standard, and these are entirely suitable as a first guitar for a beginner.

INTONATION (FAT20): Each saddle is fitted with a locking screw to prevent any movement. To adjust the intonation, loosen the saddle's locking screw with a 2 mm Allen wrench. (D) To set the intonation, insert a 2.5 mm Allen wrench into the saddle screw located at the rear of the tremolo.

I started out doing pretty much what I do now on an acoustic, and transferred it to electric when I was able to get a paper route and buy a crappy red electric guitar. I knew the value of working stripped down, and I still do, although in this day and age I've made a lot of records with different sounds. I must say I really love what technology can afford you.

Regardless of what side anyone is on, tonewood's relevancy is just a small part of a bigger discussion. Simply talking about guitars sparks interest in guitars, and this is and will always be a good thing. Any pursuit that expands one's creative and mental abilities can be regarded, in most cases, as a grand and noble thing. So as the fanatical sides rage against each other arguing about tonewood, interest in the instrument they're picking apart will inevitably grow.
If you're using a bunch of high-gain pedals, or a lot of pedals chained together, chances are you'll get a little hum or unwanted buzzing from your amp. This is especially noticeable if you're using high-gain amps and guitars. If your amp is buzzing when you're not playing anything, you might benefit from a noise gate pedal, as noise gates cut out all that unwanted noise while preserving your tone.

First of all, build quality. CTS's sturdy casings, brass shafts and contact patch, and solid connections are second to none, and importantly they are precision-made by a company that has been doing so for a long, long time. Fitting a well-made pot means you'll likely only need to do it once in that guitar; that's important, I feel! There are, however, a lot of different variations of CTS pot, and that is why I now only swear by the 450 Series and 'TVT' Series; both are constructed with the same components, and I like consistency here! That's why you'll only find these models of pot in my harnesses. I've seen some lower-quality series of CTS pots that have been wildly inconsistent, which I'll get to next...

Yes, the Les Paul is a signature model for the late, great guitarist Les Paul. This signature instrument is one of the few models ever to have other famous players release signature versions of their own. The impact of the Les Paul has made it one of the most recognized instruments on the planet, due to its amazing versatility and high quality of craftsmanship.

Basically, Power Soaks are in-line devices that attenuate the signal from a full-out, saturated tube amplifier, preserving the tone and sustain while vastly reducing the bone-crushing volume. That signal flows from the attenuator to a speaker cabinet, which is then miked, reproducing the sound at a very manageable volume level. A Power Soak is like a second master volume control, absorbing the full power of the amp and converting that power into heat (these units get very hot!)
while passing only a small portion of that power to the speaker. While there is an inherent loss of the natural non-linear speaker distortion associated with screaming guitar amps, and of the pleasing sizzle and cabinet "thump" that results, the trade-off is obvious.

Initially inspired by his older brother Jimmie, Stevie picked up the guitar at an early age and was playing in bands by the time he was 12. By the time he formed his legendary trio Double Trouble in 1980, Stevie Ray Vaughan was already a legend in his adopted hometown of Austin, Texas. After hearing and seeing Vaughan play at Switzerland's Montreux Jazz Festival, pop icon David Bowie invited Stevie to play on his Let's Dance album. Vaughan's career took off from there.

Bought a TubeMeister 18 Twelve about three years ago. Love the size and options of this amp. I primarily use it at home, having replaced a Fender and a Marshall combo with this one. I really like the sound, but recently blew a power tube, as well as a fuse and a capacitor as a result. Replaced the Chinese power tubes with JJs when it was professionally repaired. All seems well, but I wonder about the reliability of this amp in the long run. My tech recommended changing the tubes every year or two, especially if using the power soak feature, since that runs the amp really hard. I also read that it generates more heat inside the cab (no vents, closed back). Overall I still like the amp, but after dropping $200 on repairs, after spending $800 on the amp, I'm having second thoughts about long-term reliability. I'm not using the power soak very much any longer, and I'm keeping a better eye on the TSC (Tube Safety Control)... not sure if it actually did what it is supposed to. To me... less features and simplicity could be ...more

Two ways. The most important is practice. But the other way is technique: proper fingering. Some chords have multiple ways they can be fingered, and you always want to pick the easiest.
Now, some fingerings may not *seem* the easiest, just because they aren't the ones you already know, but in the long run they are worth learning because they really do make things easier. In particular, most people play an open A chord the wrong way, but the proper fingering makes it easier.

The essence of fingering is laziness: you want to move your hand and fingers as little as possible. So in particular, if you have a finger down in one chord that's already in the right place for the next chord, you want to just *leave* it there. Don't pick it up only to place it back down in the same place. And if you can use a fingering that *lets* you just leave it there, then that's clearly the choice.

So let's look at the open A chord. Most people play it with their 1st finger on the 4th string, 2nd finger on the 3rd string, and 3rd finger on the 2nd string, three-in-a-row. But that's a weak fingering (however popular it is). The better fingering is this: 1st finger on the *3rd* string, 2nd finger on the 4th string, 3rd finger on the 2nd string. It may *look* a little awkward, and feel awkward until you learn it, but it really is the better fingering.

Why? Consider the context of an A chord. What chords are you most likely to want to go to from an A? The biggest answer would probably be D. Notice that if you finger the A chord as I recommend, your first finger is already in the right place for the D chord and can just be left there! You only have to move two fingers, instead of all three, to switch between the two chords. This lets you do it faster and smoother. The other chord you'd be likely to go to from an A would be an E, and while we don't have any fingers exactly in the right place, we at least already have the 1st finger on the 3rd string, as we want it for an E; we just have to slide it back one fret. This is still easier than entirely re-arranging all three fingers. Finally, more rarely, you might want to go between A and Amaj7.
For instance, the old Beatles song "Mother Nature's Son" uses the sequence A Amaj7 A7. This is perfect for this fingering! You just slide your first finger back one fret to make the Amaj7, then take it off entirely to do the A7.

Similarly, a G chord should normally be fingered using your 2nd, 3rd, and 4th fingers, instead of your 1st, 2nd, and 3rd. This makes it much easier to go to C, the most likely chord for you to be going to.

But no fingering rule is absolute; it's always contextual. If you have a song which requires you to move to something more unusual, and a different fingering would make that particular move easier, then use the different fingering. For instance, if I had something which required that I add an A note to the top of my G chord, then I might well use the common 1-2-3 fingering for the G chord, so that I'd leave my pinkie free to reach the A note.

In the late 1950s, various guitars in the Kay line were assigned new model numbers; according to the 1959 catalog, the Thin Twin became K5910 and the Electronic Bass became K5965.[18] Both instruments remained in Kay's catalog offerings with only minor cosmetic variations until 1966, when Kay revamped its entire guitar line to only feature budget instruments. Kay also manufactured versions of the Thin Twin guitar under the Silvertone (Sears) and Old Kraftsman (Spiegel) brands.

To tell you the truth, in the first few years after I started playing, once I had learned the use of the switch, I was approached by a man who was also a guitarist. He said, "Your guitar sounds good; I believe it is expensive." Well, I bought the guitar for only $150, but toggling the switch to the right pickup at the right time makes my guitar sound like an expensive one.
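The "lazy fingering" idea above can be made concrete: model a fingering as a set of (string, fret, finger) placements and count how many placements actually change between chords. This is a small sketch (the chord shapes encode the fingerings described above; string 1 is the high E, and the shape names are just illustrative):

```python
# A fingering is a set of (string_number, fret, finger) placements.
# String 1 is the high E; fingers are numbered 1 (index) to 4 (pinky).
A_RECOMMENDED = {(3, 2, 1), (4, 2, 2), (2, 2, 3)}   # the fingering advocated above
A_COMMON      = {(4, 2, 1), (3, 2, 2), (2, 2, 3)}   # the usual three-in-a-row
D_MAJOR       = {(3, 2, 1), (1, 2, 2), (2, 3, 3)}

def fingers_to_move(chord_from, chord_to):
    """Placements needed for the new chord that are not already down; shared ones stay put."""
    return len(chord_to - chord_from)

# With the recommended A shape, the 1st finger is already placed for D:
print(fingers_to_move(A_RECOMMENDED, D_MAJOR))  # -> 2
# With the common three-in-a-row shape, all three fingers must move:
print(fingers_to_move(A_COMMON, D_MAJOR))       # -> 3
```

The set difference captures exactly the argument in the text: the recommended A shape shares the 1st-finger placement with D, so only two fingers move instead of three.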
The two common guitar amplifier configurations are: a combination ("combo") amplifier that includes an amplifier and one or more speakers in a single cabinet, and a standalone amplifier (often called a "head" or "amp head"), which passes the amplified signal via a speaker cable to one or more external speaker cabinets. A wide range of speaker configurations is available in guitar cabinets—from cabinets with a single speaker (e.g., 1×10" or 1×12") to multiple speakers (e.g., 2×10", 4×10" or 8×10").

Guitar straps may be small, but they play a big role in your performance and comfort level during gigs or practice sessions. A top-quality strap keeps your axe securely in place while you're shredding on stage, and reduces stress on the arm and shoulder. More than simply functional, guitar straps add a decorative look to your stage presence to complement your own personal vibe. To that end, El Dorado offers a variety of stylish, durable guitar straps to add to your accessory collection, allowing you to spend less time wrangling straps and more time focusing on the more important task of making awesome music.

THE CONTROL CAVITY: Routing the control cavity is just as important as the neck pocket, but with a couple more steps. The best thing to do is to cut out the plastic cover first. Trace the pattern that you came up with for it on the plastic, then cut it out with a jig saw. Use a fine-tooth blade; it prevents the plastic from chipping and will also yield a smoother cut. Once this is done, take your template, reverse it, and trace the pattern on the back side of the body. Next, set your router to a depth that is the same as the thickness of the plastic plate and rout the cavity, working out to the line you drew. I do this freehand, since the first cut is too shallow for a template. Be careful when you do this, and test-fit the plate you cut to make sure you get a good fit.
Then you will draw another line about 1/4" along the inside of the cavity you routed out, leaving extra room in areas for the screws you will use later on to secure the control plate. Rout this area out in the same way, working out to the line you drew. When you start to get close to the halfway point in the wood, start to think about how much wood you need to leave at the bottom. Usually 1/4" is good, but make sure you are careful! I miscalculated once and ended up going all the way through the body. Bad experience.

Description: Body: Maple - Flamed - Body Construction: Semi-Hollow (Chambered) - Neck Wood: Maple - Fingerboard: Rosewood - Frets: 20 - Inlay: Block - # of Strings: 6 - Scale Length: 25.5" (65cm) - Headstock: 3+3 - Bridge: Tune-O-Matic - Bridge Construction: Rosewood - Cutaway: Single - Hardware: Chrome, Diecast, 2x Volume Control, 2x Tone Control, 3-Way Switch - Pickups: Humbucker - Pickup Configuration: H-H - String Instrument Finish: Amber, Red

The use of overdrive distortion as a musical effect probably originated with electric guitar amplifiers, where the less pleasant upper harmonics created by overdriving the amp are filtered out by the limited frequency response of the speaker. If you use a distortion plug-in without following it up with low-pass filtering (or a speaker simulator) in this way, you may hear a lot of raspy high-end that isn't musically useful. This is why electric guitar DI'd via a fuzz box or distortion pedal sounds thin and buzzy unless further processed to remove these high frequencies.

The Ovation Guitar Company was founded by Charles Kaman and is based in the USA. The company primarily manufactures nylon-string and steel-string acoustic guitars. These are the kind of guitars that are ideal for recording in a studio and also great for stage performance. Their design incorporates a wood top with a rounded, synthetic bowl shape instead of the traditional back and sides.
CE44-RR is one of the most popular series of acoustic guitars produced by this brand. This is an expensive brand of guitars whose starting price is approximately 31,555 INR.

Reverb works well for acoustic guitars because it's a less intrusive effect that doesn't overtake the clean signal. Echo and delay pedals can be more difficult to tame from a feedback perspective, especially when the echoing trail gets too long. With reverb, you can have a thick effected layer with a relatively short trail behind it, especially with the HOF's short/long switch.

You asked, and you shall receive, Sonicbids blog readers. Per multiple requests, here's my guide to, "When the hell do I start turning these knobs, and where do they go?" But before we begin, I offer you the fine print: these references are general ideas for where to begin to look for sonic issues with particular sounds, instruments, and voices. I'm not going to tell you "always notch this 9 dB here and add 3 dB here with a wide boost and, voila, perfect sound!" because it's unfortunately just not that simple. So before you message me, "Aaron, I notched so much 250 Hz out of my snare, I snapped the knob off the console, and it still sounds muddy!" just know that not all sound sources are created equal.

The 2nd basic beginner guitar chord you should learn is C, or C major. You don't have to say "major" in the name of the chord. If you just say C chord, it's assumed that it's a major chord. You only want to strum the top 5 strings (that means the highest-sounding 5 strings, not their relationship to the floor). The X in the guitar chord chart means not to play that string, or to mute it.

SOLD OUT: Here is another fine example of a professional high-quality Japan-crafted guitar. This one is "cross-braced" and is a Dreadnought-style acoustic, like a Martin type, exhibiting superior solid construction as well as the very high-grade mahogany body top, sides, and back, which appears to be all solid.
The neck's fretboard is a wonderful Indian rosewood. This example is believed to be a vintage 1986 model, serial # 86021355. The sound is rich, expressive, and very toneful, as would be expected from a quality-built instrument. The playability "action" is great, EZ to play, and this guitar stays in tune very well too, with its quality original Takamine sealed chrome tuners ("Grover type"). This guitar is professional grade and will serve you well. This guitar is not a new guitar and IS a real VINTAGE guitar; it has mellowed well, and its condition is rated a solid 8.5/10, very good to excellent, with some natural wear (dings, scratches, etc.). Overall appearance is gorgeous and is sure to please. SOLD.

The slide part on that track was quite difficult to simulate, but again, the guy I have playing in my band, that I've been playing with for a while, can do it, and he and my son are the only two guys I know that play it right. Recently, I had Ronnie Wood playing with me, and he did a good job with it. I think if you have your head on it, it can be done.

Some areas of the top's lacquer finish have been peeled away from the long-ago removal of a few stickers and black electrical tape (the previous owner admitted to decorating the guitar with the black stripes in a tiger theme). The guitar plays well, with a good neck angle and decent original frets. The guitar was just set up this past month by the pros at the renowned Guitar Factory in Orlando (http://www.guitarfactory.us/). It now plays great and needs nothing – they do great work! Pickups read 4.12 (neck) and 4.20 (bridge), and pots and switches work well. And, very important to note on vintage Gretsch guitars, there is NO binding rot. Also includes the original hard-shell case.

First Act is a very peculiar guitar company. They have guitars that sell at Toys R Us that will literally fall apart in your hands. They sell pedals that are a complete joke, leaving you with the impression that they must be a bad, bad joke.
Then something strange happened: I did a little research and found some info that was stunning. First Act has a couple of guitar lines that are some of the finest guitars I have ever seen, heard, or even read about. They have guitars that go for $3000 plus and are better guitars than any person commenting on this board will ever have the opportunity of even being in the same room with (including myself). Who would have thought?! Go figure.

Frets are the metal strips (usually nickel alloy or stainless steel) embedded along the fingerboard and placed at points that divide the length of the string mathematically. The strings' vibrating length is determined when the strings are pressed down behind the frets. Each fret produces a different pitch, and each pitch is spaced a half-step apart on the 12-tone scale. The ratio of the widths of two consecutive frets is the twelfth root of two (2^(1/12), whose numeric value is about 1.059463). The twelfth fret divides the string into two exact halves, and the 24th fret (if present) divides the string in half yet again. Every twelve frets represents one octave. This arrangement of frets results in equal-tempered tuning.

This is a rare bird. It's an early Ibanez Maxitone 994. It has a huge neck but plays pretty great! It has that classic MIJ tone. I can include a new Gator case for $50 extra! The neck and frets are good! The electronics are a little dirty. I'll clean them the best I can, but I thought it worth mentioning. It is functioning as it should, just a little dirty!

The first question you should ask yourself is: what type of music genre do I like that uses guitars? If you're into metal, hard rock, or even alternative rock, selecting either one of those options is going to have an impact on the type of electric guitar you'll buy in addition to the amp. Remember that one type of electric guitar and amp is going to work better or worse than another depending on the type of sound you want.
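The fret-spacing rule quoted earlier (consecutive vibrating lengths in the ratio 2^(1/12)) can be checked numerically. A small sketch, using an assumed 25.5" scale length purely as an example:

```python
def fret_distance_from_nut(scale_length, n):
    """Distance from the nut to fret n for equal-tempered frets.

    The vibrating length remaining past fret n is scale_length / 2**(n / 12),
    so the distance from the nut is whatever is left of the scale length.
    """
    return scale_length - scale_length / 2 ** (n / 12)

scale = 25.5  # inches; a common Fender-style scale length, chosen for illustration

# The 12th fret sits exactly at half the scale length (one octave up).
print(fret_distance_from_nut(scale, 12))  # -> 12.75

# The ratio of consecutive vibrating lengths is the twelfth root of two.
l11 = scale - fret_distance_from_nut(scale, 11)
l12 = scale - fret_distance_from_nut(scale, 12)
print(round(l11 / l12, 6))  # -> 1.059463
```

The same formula gives the 24th fret at three quarters of the scale length, i.e. it halves the remaining string again, exactly as the text states.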
Washburn started in Chicago in 1883. They manufactured guitars and various other string instruments. Now they're a division of the US Music Corp and owned by JAM Industries USA, but they continue to produce quality guitars. In the beginning, they mostly focused on banjos and mandolins. Starting in the '80s, they branched off into producing signature guitars. Nowadays they make a wide variety of instruments and are very beginner friendly. Washburns are made from fine-quality wood. This means they can get pricey, but the quality the solid wood offers is well worth the price increase. They're a decent American company that makes very consistent instruments.

I have a Hohner DC. It is either a MIC or a MIK. It does not have body or neck bindings, but in every other respect is very nice. It was one of the first guitars I got when I started playing about 5 years ago. As a matter of fact, I had not played it for over a year – I recently got it out of the case, re-strung it, and played it regularly for a couple of weeks. I have been going over my collection looking for things I could sell off, but I decided to keep this one.

If the Schecter wasn't quite fast enough, this lower-priced version of Steve Vai's signature guitar should get the job done. The Wizard III neck is a direct copy from its more expensive variation, and when combined with the 24 jumbo frets, creates a speed machine. Because Vai himself is a versatile guitarist, though, this guitar can pretty much do it all, though if you like a chunky neck for chords, you'll have to look elsewhere. You even get the Tree of Life inlay of the twice-the-price original, which looks great.

In music, a guitar chord is a set of notes played on a guitar. A chord's notes are often played simultaneously, but they can be played sequentially in an arpeggio. The implementation of guitar chords depends on the guitar tuning.
Most guitars used in popular music have six strings with the "standard" tuning of the Spanish classical guitar, namely E-A-D-G-B-E' (from the lowest-pitched string to the highest); in standard tuning, the intervals present among adjacent strings are perfect fourths except for the major third (G,B). Standard tuning requires four chord-shapes for the major triads.

They say good things come in small packages. Well, "they" weren't wrong! The Orange Micro Terror Guitar Amplifier Head is no bigger than a lunchbox, but packs enough power to stand up to some of the bigger amplifiers out there, especially when you connect it to a 2x12 or even a 4x12 cab. It features a combination of solid-state and valve technology and throws out 20W of pure power thanks to the 1 x 12AX7/ECC83 preamp valve. Easy to use, affordable and even easier to carry around, you can easily gig with this or use it as a practice amp at home when coupled with the custom-built Orange PPC108 1x8 Closed Back Speaker Cabinet.

You can get a rough idea of what the All-Electric looked like in Gruhn/Carter's Electric Guitars (Miller Freeman Books, 1995), although this example has been refinished and replated, with a new fingerboard, tuners and added tailpiece, and is an atypical 14-fret Spanish model, possibly assembled at the end of the '30s from leftover parts. Toward the end of National Dobro's presence in Los Angeles, a great many guitars were assembled and shipped from remaining stock, often as exports. DeCurtis, Anthony (1992). Present Tense: Rock & Roll and Culture (4. print. ed.).
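The interval claim for standard tuning made earlier (perfect fourths between adjacent strings, except a major third between G and B) is easy to verify by counting semitones. A quick sketch, with string pitches written in scientific pitch notation:

```python
# Semitone index of each pitch class, with C = 0.
NOTE = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
        "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def semitones(note, octave):
    """Absolute semitone number of a pitch in scientific pitch notation."""
    return 12 * octave + NOTE[note]

# Standard tuning, low string to high: E2 A2 D3 G3 B3 E4.
strings = [("E", 2), ("A", 2), ("D", 3), ("G", 3), ("B", 3), ("E", 4)]
pitches = [semitones(n, o) for n, o in strings]
intervals = [hi - lo for lo, hi in zip(pitches, pitches[1:])]

# A perfect fourth is 5 semitones; a major third is 4.
print(intervals)  # -> [5, 5, 5, 4, 5]
```

The single 4 in the list is the G-to-B major third; every other adjacent pair is a perfect fourth, matching the description in the text.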
Durham, N.C.: Duke University Press. ISBN 0822312654. His first venture, the Phillips label, issued only one known release, and it was one of the loudest, most overdriven, and distorted guitar stomps ever recorded, "Boogie in the Park" by Memphis one-man-band Joe Hill Louis, who cranked his guitar while sitting and banging at a rudimentary drum kit.

By the early '80s, MTI was importing Westone guitars from Matsumoku, which had made its earlier Univox guitars (and the competitive Westbury guitars offered by Unicord). Westone guitars continued to be distributed by MTI until '84, when St. Louis Music, now a partner in the Matsumoku operation, took over the brand name and phased out its older Electra brand (also made by Matsumoku) in favor of Electra-Westone and then Westone. But that's another story…

Ibanez was the pioneer that launched the first 7-string guitar, creating the instrument in 1990 with the Universe model developed in collaboration with Steve Vai. Most Ibanez acoustic guitars come in a full-size dreadnought shape with a top of laminated select spruce. They have a mahogany neck, back, and sides, along with 20 frets on a rosewood fretboard. The Ibanez-branded headstock comes with attractive-quality closed chrome die-cast tuners. All these features make Ibanez guitars suitable for every style and genre of music; heavy music, though, is where the metal crowd flocks, and there Ibanez guitars are unbeatable.

Some delay pedals also come with full looping abilities, allowing you to play detailed multi-part melodies completely by yourself. A few artists to look to for great examples of delay pedal use are Angels and Airwaves, U2 and Muse. Reverb pedals are an entirely different animal. Reverb brings its own unique type of sustain to a note, infusing the sound with strong texture and character through its distinctive echo.
Creating a sound not quite like any other effect, reverb calls to mind the energetic surfer rock of the 1960s, such as Dick Dale's version of "Misirlou." You can stay true to those vintage roots or take the effect in a new, modern direction—it's up to you. With the added dimension they bring to your tone, you'll want to use your delay and reverb effects pedals at every performance. They make a unique contribution to the sound individually and even more so when you use them as a team.

Sometimes called an auto-volume, these pedals work much like the wah-wah pedal. The effect functions based on your picking dynamics, but instead of a change in tone, you get a change in volume. The effect will have no volume when you pick, but will then swell up to audible levels. It masks your pick attack and simulates the sound of a bowed instrument.

Wah-Wah: For swishy, rounded sounds that sort of sound like the guitar is wailing, a wah-wah pedal employs a sweeping filter controlled by a spring-loaded treadle, creating quirky frequency boosts as you work the pedal up and down. A famous version of this pedal is marketed by one manufacturer as the "Crybaby," in an attempt to describe its tone in one word. The late Jimi Hendrix used one of these pedals to great advantage.
All-fifths tuning is a tuning in intervals of perfect fifths like that of a mandolin, cello or violin; other names include "perfect fifths" and "fifths".[35] It has a wide range, and thus it requires an appropriate range of string gauges. A high b' string is particularly thin and taut, which can be avoided by shifting the scale down by several steps or by a fifth.

From loopers to distortion, effects pedals are a major part of guitar playing these days – and there are two ways to feed these pedals to the amp. You can run them from the front through the instrument input, or you can use an FX loop. The benefit of the latter is that it allows you to insert effects between the preamp and power stage. It's a complicated topic that relies on a lot of trial and error – not to mention personal taste – but plugging boosters (overdrive, distortion, wah) into the front and then using an FX loop for modulators (chorus, flanger, delay) tends to deliver the best results.

I'm going to assume that if you're reading this, you've probably been to two dozen guitar sites, all with varying, if not conflicting, information on the correct way to do a setup. I've been there too; I've watched guys on YouTube filing down frets with a Dremel tool. Now, it didn't look right to me, but maybe it works for him. The reality is there is more than one way to do something, and that's OK. If the end result is a great-sounding instrument, it doesn't matter how you got there. So I'm going to show you my way of getting to a great-sounding electric guitar. And if you should choose to do something differently, and it works, great! Part of having some fun in life is experimenting; I encourage it.

Featuring classic Fender design, smooth playability, and simple controls, the Squier Classic Vibe Telecaster '50s is a great first electric guitar. The fixed bridge and quality tuning machines ensure simple and reliable tuning stability—a potential frustration for new players trying to learn on poor-quality guitars.
Single volume and tone controls along with two bright-sounding single-coil pickups give the beginning player a wide range of tones that are easy to control. The Telecaster has been a mainstay in music for decades and is especially associated with great country, pop, surf and rock sounds.

Want to get a good impression of how the SJ 200 sounds? Well, Dylan can show you how it strums, Emmylou how it picks, or listen to Pete Townshend thrashing nine bells out of his one on Pinball Wizard. You might also want to take in George Harrison's Here Comes the Sun or anything by the Everly Brothers. As you'd expect, given the "reassuringly expensive" (i.e. enormous) price tag, the build quality throughout is faultless, superb. The first thing you notice when you sit down to play it is just how sweetly the neck sits in your hand and how easy it is to play. It's a big lump of money, but when you buy the SJ 200 we guess you're not just buying the guitar, you're buying a piece of history.

The volume knob can act as a boost, which can take your guitar from clean sounds for rhythm playing to dirty overdrive tones for soloing. When playing a song, keep your volume knob at 6 or 7 when playing chords or verse parts, and when it's time to deliver a rockin' solo, roll the volume up to 10; you will hear not only a boost of gain (overdrive) but also a volume lift over any other instruments in the song.

Arch-top body 16" wide across the top, carved spruce top, back not carved but arched by braces, rosewood back and sides, f-holes, style 45 backstripe, bound ebony fingerboard, 2 white lines inlaid down the length of the fingerboard at the edges, hexagonal fingerboard inlays on 6 frets (sometimes pearl, sometimes ivoroid), vertical "Martin" pearl peghead logo, nickel-plated parts, sunburst top finish.
I remember the first time I saw Eddie Van Halen on MTV, the way he played two hands on the fingerboard during his short "Jump" guitar solo. I loved his cool "Frankenstein" guitar, so named because he cobbled together a variety of guitar parts and decorated his creation with colored tape and paint. Even as a 13-year-old who grew up primarily listening to, and playing, classical music, I felt compelled to run out and buy his band's "1984" LP at my local Tower Records store.

Chuck Berry is the true founding forefather of rock and roll. His guitar playing in the mid-Fifties defined the true personality and vocabulary of rock and roll guitar so comprehensively and conclusively that it's impossible to find any rock player who doesn't still steal his licks, riffs and tricks today. In fact, Berry doesn't even tour with his own band; instead, he hires local musicians to back him up, because almost everyone all over the world knows how to play his songs.

Multi-effects processors come in various configurations, too. Some are floor units that have built-in foot pedals and controllers so they can be operated while your hands remain on your guitar. There are rackmount processors (these can be fitted into a rack of recording gear in line with your signal chain) that incorporate a preamp for your guitar.
The more sophisticated models have MIDI I/O for connecting guitar synthesizers to keyboards, modules, computers and other MIDI devices, and include a divided pickup to attach to your standard guitar. These processors pack effects libraries that offer combinations of effects, amp models and stompboxes that can number in the thousands. Switching can be controlled by onboard knobs, foot controllers or guitar-picking technique. Expect to pay considerably more for a rackmount effects processor, in a range of three- to four-digit prices.

As a player and lover of the instrument, I can tell you unequivocally that you are all right. Run a line straight into the board and wood doesn't make a difference, and you will add effects in your mix. Or stand in front of a Marshall stack with a couple of humbuckers catching the feedback and you appreciate Honduran mahogany for its tone. You can certainly tell a difference in the sounds you make, and especially feel the difference in your hands. And if you can't agree on these concepts, you dishonor the instrument and the craft of luthiers. As my buddy Terry keeps telling me, 'Shut up and play.' Peace out, fellow geeks.

What we consider standard size today was not so standard back in the '30s. Back then the "parlor guitar" or "blues box" was commonly used, with its compact body and mid-emphasized tone. Many artists used this instrument to shape many of the musical styles that we have today. The L-00 Standard from Gibson captures the iconic "blues box" faithfully for today's players, adding in their premium touch and modern tech that results in a true timeless museum-quality instrument.
Sound engineers prevent unwanted, unintended distortion and clipping using a number of methods. They may reduce the gain on microphone preamplifiers on the audio console; use attenuation "pads" (a button on audio console channel strips, DI units and some bass amplifiers); and use electronic audio compressor effects and limiters to prevent sudden volume peaks from vocal mics from causing unwanted distortion.

Body: Body shape: Double cutaway; Body type: Solid body; Body material: Solid wood; Top wood: Not applicable; Body wood: Swamp Ash on translucent and burst finishes, Basswood on solid finishes; Body finish: Gloss; Orientation: Right-handed. Neck: Neck shape: C medium; Neck wood: Hard-rock Maple; Joint: Bolt-on; Scale length: 25.5 in.; Truss rod: Standard; Neck finish: Gloss. Fretboard: Material: Rosewood

Here at Dave's Guitar Shop we are proud to have a staff of world-class guitar and amp technicians. Be it simple guitar setups, restrings, grafting on broken headstocks or restoring timeless classics, our techs work to the highest quality. With a shared experience of over 50 years and access to one of the largest collections of historic guitars for reference, you can rest assured that your repair or restoration will be completed accurately and with great care and precision.
In the guitar amplifier world, ANY of the "boutique" brands (some are truly boutique, offering one-of-a-kind amps, but many are just small-scale shops that have a couple of lines to choose from and a couple of customizable features) fit this classification of "top shelf," because they offer the highest-quality components, are assembled with the greatest of care (usually by hand, with almost no automation), and generally offer tweaks and improvements on older designs. In effect, these amps are "custom built" or even bespoke.

Because any acoustic guitar can be made into an acoustic-electric, from what I've seen — and this is simply an observation, not a blanket statement — most of these sacrifice both quality of guitar and quality of pickup to sell affordable instruments in the name of convenience. So for the introductory acoustic player, here is my advice: skip the acoustic-electric section and find a plain ol' acoustic guitar that you like. When the time is right, plenty of companies make a variety of pickups designed for acoustic guitars, which will give you more options when selecting a method of amplifying your acoustic.

Like any other electronic product, the amp has gone through notable changes and updates over the years. However much the technology has changed, though, many still prefer a tube-powered amplifier over solid-state and modeling amps. This is mainly because the sound of a valve is considered the organic or natural way an amplifier should sound, while the other two are engineered to sound like a tube amp (especially the modeling amps).

I always say that Jose Feliciano is indeed one of the greatest guitarists that's ever lived. Flamenco, Latin, bolero, classical, rock, etc. You name it and Jose can play it. Why he's not on Rolling Stone's 100 greatest guitarists of all time is beyond anyone's guess. Don't believe me? Look up "Purple Haze," "The Thrill Is Gone," "Flight of the Bumblebee," and "Malagueña" by Jose Feliciano on YouTube.
The guy can play anything and make it his own.

Play heavy rock or metal music? Listen up! These guitars feature a twin-horn cutaway shape and a long-neck design. They are lightweight compared to the Les Paul, but can be difficult to get used to; they can feel unbalanced because of the long neck. They have two humbucker pickups like Les Paul guitars, but have different volume and tone controls for precise settings.

Woods typically used in solid-body electric guitars include alder (brighter, but well rounded), swamp ash (similar to alder, but with more pronounced highs and lows), mahogany (dark, bassy, warm), poplar (similar to alder), and basswood (very neutral).[19] Maple, a very bright tonewood,[19] is also a popular body wood, but is very heavy. For this reason it is often placed as a "cap" on a guitar made primarily of another wood. Cheaper guitars are often made of cheaper woods, such as plywood, pine or agathis—not true hardwoods—which can affect durability and tone. Though most guitars are made of wood, any material may be used. Materials such as plastic, metal, and even cardboard have been used in some instruments.
http://acm.hit.edu.cn/hojx/showproblem/1016/
# 1016 - Joseph's problem I

Time limit: 10 s. Memory limit: 32 MB. Submitted: 1904. Accepted: 605.

### Problem Description

The Joseph's problem is notoriously known. For those who are not familiar with it: among n people numbered 1, 2, ..., n, standing in a circle, every m-th is executed, and only the life of the last remaining person is spared. Joseph was smart enough to choose the position of the last remaining person, thus saving his life to give the message about the incident.

Although many good programmers have been saved since Joseph spread this information, Joseph's cousin introduced a new variant of the malignant game. This insane character is known for his barbarian ideas and wishes to clean up the world of silly programmers. We had to infiltrate some of the agents of the ACM in order to learn the process used in this new mortal game. To save yourself from this evil practice, you must develop a tool capable of predicting which person will be saved.

The Destructive Process: the persons are eliminated in a very peculiar order; m is a dynamic variable, which each time takes a different value corresponding to the succession of prime numbers (2, 3, 5, 7, ...). So in order to kill the i-th person, Joseph's cousin counts up to the i-th prime.

### Input

Separate lines, each containing n [1..3501]; input finishes with a 0.

### Output

Separate lines, each containing the position of the person whose life will be saved.

### Sample Input

6

### Sample Output

4
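The destructive process can be simulated directly. Here is a sketch in Python (helper names are my own, not part of the problem statement): keep the circle as a list, and for the i-th elimination step forward by the i-th prime.

```python
def first_primes(count, limit=40000):
    """Sieve of Eratosthenes. limit=40000 covers the 3500th prime (32609),
    enough for the largest allowed n of 3501."""
    sieve = bytearray([1]) * limit
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, limit, i)))
    return [i for i in range(limit) if sieve[i]][:count]

def survivor(n):
    """Position saved when the i-th elimination counts up to the i-th prime."""
    people = list(range(1, n + 1))
    pos = 0
    for m in first_primes(n - 1):
        pos = (pos + m - 1) % len(people)   # index of the person to eliminate
        people.pop(pos)
        pos %= len(people)                  # wrap if we removed the last slot
    return people[0]
```

For the sample input n = 6 the simulation eliminates persons 2, 5, 6, 1, 3 in that order, leaving person 4, which matches the sample output.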
https://www.maplesoft.com/support/help/maplesim/view.aspx?path=Statistics/AutoCorrelation&L=E
Statistics[AutoCorrelation] - compute sample autocorrelations of a real Vector

Calling Sequence
    AutoCorrelation(X)
    AutoCorrelation(X, lags)

Parameters
    X - discrete univariate real time series given as a Vector, list, DataSeries object, Matrix with one column, DataFrame with one column, or TimeSeries object with one dataset.
    lags - (optional) maximal lag to return, or a range of lags to return. By default all possible lags are returned.

Options
• scaling: one of biased, unbiased, or none. Default is none. scaling=biased computes $R_k = \frac{C_k}{n}$. scaling=unbiased scales each $C_k$ by $\frac{1}{n-k}$.
• raw: if this option is given, the output is not normalized so that the first entry is 1 when scaling=unbiased or scaling=none.

Description
• For a discrete time series X, the AutoCorrelation command computes the autocorrelations $R_k = \frac{C_k}{C_0}$ where $C_k = \sum_{t=1}^{n-k} (X_t - \mu)(X_{t+k} - \mu)$ for $k = 0..n-1$ and $\mu$ is the mean of X.
• For efficiency, all of the lags are computed at once using a numerical discrete Fourier transform. Therefore all data provided must have type realcons and all returned solutions are floating-point, even if the problem is specified with exact values.
• Note: AutoCorrelation makes use of DiscreteTransforms[FourierTransform] and thus will work strictly in hardware precision, that is, its accuracy is independent of the setting of Digits.
• For more time series related commands, see the TimeSeriesAnalysis package.
Examples

> with(Statistics):
> AutoCorrelation(<1,2,1,2,1,2,1,2>)

    [1., -0.875000000009056, 0.750000000020185, -0.625000000014873,
     0.500000000015000, -0.375000000015127, 0.250000000009815,
     -0.125000000020944]                                              (1)

> AutoCorrelation(<1,2,1,2,1,2,1,2>, 2)

    [1., -0.875000000009056, 0.750000000020185]                       (2)

> AutoCorrelation(<1,2,1,2,1,2,1,2>, 0..2)

    [1., -0.875000000009056, 0.750000000020185]                       (3)

> AutoCorrelation(<1,2,1,2,1,2,1,2>, 1..2)

    [-0.875000000009056, 0.750000000020185]                           (4)

> AutoCorrelation(<1,2,1,2,1,2,1,2>, 2, scaling=unbiased)

    [1., -1.00000000001035, 1.00000000002691]                         (5)

> AutoCorrelation(<1,2,1,2,1,2,1,2>, 2, scaling=biased)

    [0.0624999999981250, -0.0546874999989254, 0.0468749999998553]     (6)

> AutoCorrelation(<1,2,1,2,1,2,1,2>, 2, raw)

    [0.499999999985000, -0.437499999991403, 0.374999999998843]        (7)

> t := TimeSeriesAnalysis:-TimeSeries([[1,2,1,2,1,2,1,2], [8,7,6,5,4,3,2,1]],
      header=["Sales","Profits"], enddate="2012-01-01", frequency="monthly")

    t := [Time series; Sales, Profits; 8 rows of data: 2011-06-01 - 2012-01-01]   (8)

> AutoCorrelation(t[() .. (), "Sales"], 2)

    [1., -0.875000000009056, 0.750000000020185]                       (9)

Autocorrelation can be
used to create correlograms, which are useful for detecting periodicity in signals.

> R := <seq((evalf(sin(17.2*i)*cos(13.8*i) + 1.17) + rand(0..1)()*2/3)/3, i = 1..500)>:
> LineChart(R, size=[0.5, "golden"])
> AutoCorrelationPlot(R, lags=100)

Periodicity in a time series can be observed with AutoCorrelation.

> with(TimeSeriesAnalysis):
> Data := Import("datasets/sunspots.csv", base=datadir, output=Matrix)   (10)
> tsData := TimeSeries(Data[265..310, 2])

    tsData := [Time series; data set; 46 rows of data: 1973 - 2018]      (11)

> S := AutoCorrelation(tsData)   (12)
> AutoCorrelationPlot(GetData(tsData))

Compatibility
• The Statistics[AutoCorrelation] command was introduced in Maple 15.
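For readers without Maple, the defining formula $R_k = C_k / C_0$ is easy to reproduce. A direct Python sketch (my own helper, not part of the Maple documentation; Maple itself computes all lags at once with an FFT, while this O(n^2) version simply follows the definition):

```python
def autocorrelation(x, lags=None):
    """Sample autocorrelations per the definition above:
    C_k = sum_{t=1}^{n-k} (X_t - mu)(X_{t+k} - mu) and R_k = C_k / C_0.
    Fine for short series; an FFT is only needed for speed on long ones."""
    n = len(x)
    mu = sum(x) / n
    d = [xi - mu for xi in x]                     # centered series
    C = [sum(d[t] * d[t + k] for t in range(n - k)) for k in range(n)]
    R = [c / C[0] for c in C]                     # normalize so R_0 = 1
    return R if lags is None else R[: lags + 1]
```

For the example vector <1,2,1,2,1,2,1,2> this reproduces output (1) up to rounding: [1, -0.875, 0.75, -0.625, 0.5, -0.375, 0.25, -0.125].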
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-5-polynomials-and-factoring-5-2-factoring-trinomials-of-the-type-x2-bx-c-5-2-exercise-set-page-317/2
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition) The sum of a negative number and a positive number is negative when the absolute value of the negative number is greater than the absolute value of the positive number. Example: $-4+3=-1$ because $|-4| \gt |3|$ Thus, the given statement is true.
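The rule can be checked numerically with a throwaway sketch (mine, not part of the textbook): the sign of a negative-plus-positive sum follows whichever addend has the larger absolute value.

```python
# Sign of (negative + positive) follows the larger absolute value.
cases = [(-4, 3), (-2, 9), (-7, 7), (-10, 1)]
for neg, pos in cases:
    total = neg + pos
    if abs(neg) > abs(pos):
        assert total < 0     # e.g. -4 + 3 = -1
    elif abs(neg) < abs(pos):
        assert total > 0     # e.g. -2 + 9 = 7
    else:
        assert total == 0    # e.g. -7 + 7 = 0
```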
https://itaibn.wordpress.com/2021/11/
# Quantilizer ≡ Optimizer with a Bounded Amount of Output

In 2015 MIRI proposed quantilizers as a decision-theory criterion an AI may use that allows it to strive towards a goal while not optimizing too hard at that goal, so that if a superintelligence were a quantilizer there is a smaller chance it will do something catastrophic, like kill all humans and turn all matter on Earth into paperclips because it's trying really hard to maximize the productivity of a paperclip factory. For a real number $0 \leq p \leq 1$, a p-quantilizer is an agent which, given a goal and a space of possible strategies, picks randomly out of the top proportion $p$, i.e., randomly among the strategies that are better than a proportion $(1-p)$ of all strategies. In fact, a quantilizer is for all practical purposes equivalent to a perfect optimizer (an agent that always picks the best of all its possible strategies) that is further restricted to have its strategy specified by a string of bits whose length must be less than $\log_2 (1/p)$. Since the people at MIRI have already considered the possibility of controlling an AI by restricting its ability to perform input and output, and have generally decided that they are not satisfied with this, I consider the idea of quantilizers to be a failure.

Roughly speaking, my claim is that for a natural number $n$, up to a constant additive factor, a perfect optimizer that is limited to strategies taking at most $n$ bits performs as well in any goal as a $2^{-n}$-quantilizer. More precisely, to explain what I mean by "up to a constant additive factor": there is a reasonably-sized constant $c$ such that an optimizer with $n+c$ bits of output performs as well as a $2^{-n}$-quantilizer, and a $2^{-(n+c)}$-quantilizer performs as well as an optimizer with $n$ bits of output.
I also need to assume that the space of strategies is given a rich enough encoding that we can do things like specify a strategy by specifying a computer program whose output is that strategy, or that the environment is rich enough that this can be done implicitly with strategies like "Walk to a computer and type this program, and attach to the computer the accessories necessary to run the strategy specified by the program." To really make the result true up to a constant factor, and not a logarithmic factor or anything like that, you need to be careful about encoding, using stuff like prefix-free encodings, but I don't think that's really important and will gloss over it.

Quantilizers are at least as good as bounded-output optimizers: If a particular strategy is optimal among all strategies of length $\leq n$ bits, then since there are at most $2^{n+1}$ such strategies, this one optimizing strategy already makes up a proportion $2^{-(n+1)}$ of all strategies with $\leq n$ bits, so a $2^{-(n+1)}$-quantilizer over strategies with $\leq n$ bits must use exactly this strategy or an equally-performing strategy. If we assume the strategies have a measure biased towards short strategies, or use a prefix-free encoding, or a subset of the strategies is implicitly something like that, a proportion $\geq \frac {1} {c}$ of all strategies will have length $\leq n$, so a proportion $\geq 2^{-(n+c)}$ will be identical to this $\leq n$-bit optimal strategy, so a $2^{-(n+c)}$-quantilizer will perform at least as well as this strategy.

Bounded-output optimizers are at least as good as quantilizers: Using a deterministic pseudo-random number generator, it is possible to precisely specify a sequence $s_0, s_1, s_2, \dots$ of strategies that are fairly sampled from the space of all strategies. A proportion $2^{-n}$ of these strategies are also in the top $2^{-n}$ of performance among all strategies.
So the smallest $m$ such that $s_m$ is among the top $2^{-n}$ in performance is statistically certain to have $m \leq C 2^n$. It takes $n + O (1)$ bits to specify the $m$, plus an additional $O (1)$ bits to specify the pseudo-random sequence and the command to perform strategy $s_m$ given $m$. So an optimizer with $n + O (1)$ bits of output can specify the strategy $s_m$ which is $2^{-n}$-quantilizing, so the actual optimizing strategy is at least as good.
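The correspondence can be illustrated with a toy simulation. This is entirely my own sketch, not code from the post: a "strategy" is just a uniform random score (utility equal to the score), a p-quantilizer samples from the top-p slice, and the bounded optimizer's only freedom is an n-bit index into a shared deterministic pseudo-random stream.

```python
import random

def quantilize(strategies, utility, p, rng):
    """A p-quantilizer: pick uniformly at random from the top
    proportion p of strategies, ranked by utility."""
    ranked = sorted(strategies, key=utility, reverse=True)
    top = ranked[: max(1, int(p * len(ranked)))]
    return rng.choice(top)

def bounded_optimizer(n_bits, seed=0):
    """The post's construction: an optimizer whose only freedom is an
    n-bit index into a shared deterministic pseudo-random stream of
    strategies. Here a 'strategy' is a uniform score in [0, 1) and its
    utility is the score itself, so the optimizer returns the best
    score reachable within its index budget."""
    rng = random.Random(seed)                        # shared PRNG, fixed seed
    stream = [rng.random() for _ in range(2 ** n_bits)]
    return max(stream)                               # best of 2**n fair samples
```

With n bits of output the optimizer reaches the best of $2^n$ fair samples, which is statistically in roughly the top $2^{-n}$ of all strategies, mirroring what a $2^{-n}$-quantilizer achieves on average.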
https://googology.wikia.org/wiki/Zettillion
The zettillion is equal to $$10^{3 \cdot 10^{3 \cdot 10^{21}}+3}$$, or $$10^{3 \cdot 10^{3 \cdot \text{sextillion}} + 3}$$ using the short scale definition for sextillion.[1] The term was coined by Jonathan Bowers.

### Etymology

The name of this number is based on the suffix "-illion" and the SI prefix "zetta-".

### Approximations in other notations

Up-arrow notation: $$10 \uparrow (3 \times (10 \uparrow (3 \times (10 \uparrow 21)))+3)$$ (exact)
Chained arrow notation: $$10 \rightarrow (3 \times (10 \rightarrow (3 \times (10 \uparrow 21)))+3)$$ (exact)
BEAF: $$\{10,\{3\times \{10,3 \times \{10,21\}\}+3\}\}$$ (exact)
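The short-scale "-illion" pattern behind this definition can be checked for small indices; the sketch below is my own, and the zettillion itself is far too large to materialize as an integer.

```python
def illion(n):
    """Short-scale n-th -illion: 10**(3*n + 3)."""
    return 10 ** (3 * n + 3)

assert illion(1) == 10 ** 6     # million
assert illion(2) == 10 ** 9     # billion
assert illion(6) == 10 ** 21    # sextillion

# zettillion = illion(10**(3 * 10**21)) = 10**(3*10**(3*10**21) + 3).
# Its exponent alone has about 3 * 10**21 digits, so only the logarithmic
# structure (log10 = 3*10**(3*10**21) + 3) can be manipulated directly.
```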
http://linear.ups.edu/jsmath/0299/fcla-jsmath-2.99li34.html
Section CRS  Column and Row Spaces

From A First Course in Linear Algebra, Version 2.99
http://linear.ups.edu/

Theorem SLSLC showed us that there is a natural correspondence between solutions to linear systems and linear combinations of the columns of the coefficient matrix. This idea motivates the following important definition.

Definition CSM (Column Space of a Matrix). Suppose that $A$ is an $m \times n$ matrix with columns $\{A_1, A_2, A_3, \dots, A_n\}$. Then the column space of $A$, written $\mathcal{C}(A)$, is the subset of $\mathbb{C}^m$ containing all linear combinations of the columns of $A$,
$$\mathcal{C}(A) = \langle \{A_1, A_2, A_3, \dots, A_n\} \rangle$$
(This definition contains Notation CSM.)

Some authors refer to the column space of a matrix as the range, but we will reserve this term for use with linear transformations (Definition RLT).

Subsection CSSE: Column Spaces and Systems of Equations

Upon encountering any new set, the first question we ask is what objects are in the set, and which objects are not? Here's an example of one way to answer this question, and it will motivate a theorem that will then answer the question precisely.

Example CSMCS (Column space of a matrix and consistent systems). Archetype D and Archetype E are linear systems of equations, with an identical $3 \times 4$ coefficient matrix, which we call $A$ here. However, Archetype D is consistent, while Archetype E is not. We can explain this difference by employing the column space of the matrix $A$.
The column vector of constants, $b$, in Archetype D is
$$b = \begin{bmatrix} 8 \\ -12 \\ 4 \end{bmatrix}$$
One solution to $\mathcal{LS}(A,\,b)$, as listed, is
$$x = \begin{bmatrix} 7 \\ 8 \\ 1 \\ 3 \end{bmatrix}$$
By Theorem SLSLC, we can summarize this solution as a linear combination of the columns of $A$ that equals $b$,
$$7\begin{bmatrix} 2 \\ -3 \\ 1 \end{bmatrix} + 8\begin{bmatrix} 1 \\ 4 \\ 1 \end{bmatrix} + 1\begin{bmatrix} 7 \\ -5 \\ 4 \end{bmatrix} + 3\begin{bmatrix} -7 \\ -6 \\ -5 \end{bmatrix} = \begin{bmatrix} 8 \\ -12 \\ 4 \end{bmatrix} = b.$$
This equation says that $b$ is a linear combination of the columns of $A$, and then by Definition CSM, we can say that $b \in \mathcal{C}(A)$.

On the other hand, Archetype E is the linear system $\mathcal{LS}(A,\,c)$, where the vector of constants is
$$c = \begin{bmatrix} 2 \\ 3 \\ 2 \end{bmatrix}$$
and this system of equations is inconsistent. This means $c \notin \mathcal{C}(A)$, for if it were, then it would equal a linear combination of the columns of $A$ and Theorem SLSLC would lead us to a solution of the system $\mathcal{LS}(A,\,c)$.

So if we fix the coefficient matrix, and vary the vector of constants, we can sometimes find consistent systems, and sometimes inconsistent systems. The vectors of constants that lead to consistent systems are exactly the elements of the column space. This is the content of the next theorem, and since it is an equivalence, it provides an alternate view of the column space.

Theorem CSCS (Column Spaces and Consistent Systems). Suppose $A$ is an $m \times n$ matrix and $b$ is a vector of size $m$. Then $b \in \mathcal{C}(A)$ if and only if $\mathcal{LS}(A,\,b)$ is consistent.

Proof. ($\Rightarrow$) Suppose $b \in \mathcal{C}(A)$. Then we can write $b$ as some linear combination of the columns of $A$.
By Theorem SLSLC we can use the scalars from this linear combination to form a solution to $\mathcal{LS}(A,\,b)$, so this system is consistent.

($\Leftarrow$) If $\mathcal{LS}(A,\,b)$ is consistent, there is a solution that may be used with Theorem SLSLC to write $b$ as a linear combination of the columns of $A$. This qualifies $b$ for membership in $\mathcal{C}(A)$.

This theorem tells us that asking if the system $\mathcal{LS}(A,\,b)$ is consistent is exactly the same question as asking if $b$ is in the column space of $A$. Or equivalently, it tells us that the column space of the matrix $A$ is precisely those vectors of constants, $b$, that can be paired with $A$ to create a system of linear equations $\mathcal{LS}(A,\,b)$ that is consistent.

Employing Theorem SLEMM we can form the chain of equivalences
$$b \in \mathcal{C}(A) \iff \mathcal{LS}(A,\,b)\ \text{is consistent} \iff Ax = b\ \text{for some}\ x$$
Thus, an alternative (and popular) definition of the column space of an $m \times n$ matrix $A$ is
$$\mathcal{C}(A) = \left\{ y \in \mathbb{C}^m \,\middle|\, y = Ax\ \text{for some}\ x \in \mathbb{C}^n \right\} = \left\{ Ax \,\middle|\, x \in \mathbb{C}^n \right\} \subseteq \mathbb{C}^m$$
We recognize this as saying create all the matrix-vector products possible with the matrix $A$ by letting $x$ range over all of the possibilities. By Definition MVP we see that this means take all possible linear combinations of the columns of $A$, precisely the definition of the column space (Definition CSM) we have chosen.
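Theorem CSCS turns membership in a column space into a mechanical computation. The following Python sketch (my own helper names, using exact rational arithmetic) row-reduces the augmented matrix and checks for a pivot in the final column, applied to the Archetype D/E data from Example CSMCS:

```python
from fractions import Fraction

def rref_pivots(rows):
    """Row-reduce a matrix (list of row lists) over the rationals and
    return the list of pivot-column indices."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]   # swap pivot row into place
        p = m[r][c]
        m[r] = [x / p for x in m[r]]      # scale pivot row to a leading 1
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return pivots

def in_column_space(A, b):
    """Theorem CSCS: b is in C(A) iff LS(A, b) is consistent, i.e. the
    row-reduced augmented matrix [A | b] has no pivot in its last column."""
    aug = [row + [entry] for row, entry in zip(A, b)]
    return len(aug[0]) - 1 not in rref_pivots(aug)

# Coefficient matrix shared by Archetypes D and E (Example CSMCS).
A = [[2, 1, 7, -7], [-3, 4, -5, -6], [1, 1, 4, -5]]
```

With this helper, `in_column_space(A, [8, -12, 4])` confirms the Archetype D vector of constants is in $\mathcal{C}(A)$, while `in_column_space(A, [2, 3, 2])` reports that the Archetype E vector is not.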
Notice how this formulation of the column space looks very much like the definition of the null space of a matrix (Definition NSM), but for a rectangular matrix the column vectors of $\mathcal{C}(A)$ and $\mathcal{N}(A)$ have different sizes, so the sets are very different.

Given a vector $b$ and a matrix $A$ it is now very mechanical to test if $b \in \mathcal{C}(A)$. Form the linear system $\mathcal{LS}(A,\,b)$, row-reduce the augmented matrix, $[A \mid b]$, and test for consistency with Theorem RCLS. Here's an example of this procedure.

Example MCSM (Membership in the column space of a matrix). Consider the column space of the $3 \times 4$ matrix $A$,
$$A = \begin{bmatrix} 3 & 2 & 1 & -4 \\ -1 & 1 & -2 & 3 \\ 2 & -4 & 6 & -8 \end{bmatrix}$$
We first show that $v = \begin{bmatrix} 18 \\ -6 \\ 12 \end{bmatrix}$ is in the column space of $A$, $v \in \mathcal{C}(A)$. Theorem CSCS says we need only check the consistency of $\mathcal{LS}(A,\,v)$. Form the augmented matrix and row-reduce,
$$\left[\begin{array}{cccc|c} 3 & 2 & 1 & -4 & 18 \\ -1 & 1 & -2 & 3 & -6 \\ 2 & -4 & 6 & -8 & 12 \end{array}\right] \xrightarrow{\text{RREF}} \left[\begin{array}{cccc|c} 1 & 0 & 1 & -2 & 6 \\ 0 & 1 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]$$
Without a leading 1 in the final column, Theorem RCLS tells us the system is consistent and therefore by Theorem CSCS, $v \in \mathcal{C}(A)$.

If we wished to demonstrate explicitly that $v$ is a linear combination of the columns of $A$, we can find a solution (any solution) of $\mathcal{LS}(A,\,v)$ and use Theorem SLSLC to construct the desired linear combination. For example, set the free variables to $x_3 = 2$ and $x_4 = 1$. Then a solution has $x_2 = 1$ and $x_1 = 6$.
Then by Theorem SLSLC,
$$v = \begin{bmatrix} 18 \\ -6 \\ 12 \end{bmatrix} = 6\begin{bmatrix} 3 \\ -1 \\ 2 \end{bmatrix} + 1\begin{bmatrix} 2 \\ 1 \\ -4 \end{bmatrix} + 2\begin{bmatrix} 1 \\ -2 \\ 6 \end{bmatrix} + 1\begin{bmatrix} -4 \\ 3 \\ -8 \end{bmatrix}$$
Now we show that $w = \begin{bmatrix} 2 \\ 1 \\ -3 \end{bmatrix}$ is not in the column space of $A$, $w \notin \mathcal{C}(A)$. Theorem CSCS says we need only check the consistency of $\mathcal{LS}(A,\,w)$. Form the augmented matrix and row-reduce,
$$\left[\begin{array}{cccc|c} 3 & 2 & 1 & -4 & 2 \\ -1 & 1 & -2 & 3 & 1 \\ 2 & -4 & 6 & -8 & -3 \end{array}\right] \xrightarrow{\text{RREF}} \left[\begin{array}{cccc|c} 1 & 0 & 1 & -2 & 0 \\ 0 & 1 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{array}\right]$$
With a leading 1 in the final column, Theorem RCLS tells us the system is inconsistent and therefore by Theorem CSCS, $w \notin \mathcal{C}(A)$.

Theorem CSCS completes a collection of three theorems, and one definition, that deserve comment. Many questions about spans, linear independence, null space, column spaces and similar objects can be converted to questions about systems of equations (homogeneous or not), which we understand well from our previous results, especially those in Chapter SLE. These previous results include theorems like Theorem RCLS, which allows us to quickly decide consistency of a system, and Theorem BNS, which allows us to describe solution sets for homogeneous systems compactly as the span of a linearly independent set of column vectors. The table below lists these four definitions and theorems along with a brief reminder of the statement and an example of how the statement is used.
Definition NSM
    Synopsis: Null space is solution set of homogeneous system
    Example: General solution sets described by Theorem PSPHS
Theorem SLSLC
    Synopsis: Solutions for linear combinations with unknown scalars
    Example: Deciding membership in spans
Theorem SLEMM
    Synopsis: System of equations represented by matrix-vector product
    Example: Solution to $\mathcal{LS}(A,\,b)$ is $A^{-1}b$ when $A$ is nonsingular
Theorem CSCS
    Synopsis: Column space vectors create consistent systems
    Example: Deciding membership in column spaces

Subsection CSSOC: Column Space Spanned by Original Columns

So we have a foolproof, automated procedure for determining membership in $\mathcal{C}(A)$. While this works just fine a vector at a time, we would like to have a more useful description of the set $\mathcal{C}(A)$ as a whole. The next example will preview the first of two fundamental results about the column space of a matrix.

Example CSTW (Column space, two ways). Consider the $5 \times 7$ matrix $A$,
$$\begin{bmatrix} 2 & 4 & 1 & -1 & 1 & 4 & 4 \\ 1 & 2 & 1 & 0 & 2 & 4 & 7 \\ 0 & 0 & 1 & 4 & 1 & 8 & 7 \\ 1 & 2 & -1 & 2 & 1 & 9 & 6 \\ -2 & -4 & 1 & 3 & -1 & -2 & -2 \end{bmatrix}$$
According to the definition (Definition CSM), the column space of $A$ is
$$\mathcal{C}(A) = \left\langle \left\{ \begin{bmatrix} 2 \\ 1 \\ 0 \\ 1 \\ -2 \end{bmatrix},\ \begin{bmatrix} 4 \\ 2 \\ 0 \\ 2 \\ -4 \end{bmatrix},\ \begin{bmatrix} 1 \\ 1 \\ 1 \\ -1 \\ 1 \end{bmatrix},\ \begin{bmatrix} -1 \\ 0 \\ 4 \\ 2 \\ 3 \end{bmatrix},\ \begin{bmatrix} 1 \\ 2 \\ 1 \\ 1 \\ -1 \end{bmatrix},\ \begin{bmatrix} 4 \\ 4 \\ 8 \\ 9 \\ -2 \end{bmatrix},\ \begin{bmatrix} 4 \\ 7 \\ 7 \\ 6 \\ -2 \end{bmatrix} \right\} \right\rangle$$
While this is a concise description of an infinite set, we might be able to describe the span with fewer than seven vectors. This is the substance of Theorem BS.
So we take these seven vectors and make them the columns of a matrix, which is simply the original matrix $A$ again. Now we row-reduce,
$$\begin{bmatrix} 2 & 4 & 1 & -1 & 1 & 4 & 4 \\ 1 & 2 & 1 & 0 & 2 & 4 & 7 \\ 0 & 0 & 1 & 4 & 1 & 8 & 7 \\ 1 & 2 & -1 & 2 & 1 & 9 & 6 \\ -2 & -4 & 1 & 3 & -1 & -2 & -2 \end{bmatrix} \xrightarrow{\text{RREF}} \begin{bmatrix} 1 & 2 & 0 & 0 & 0 & 3 & 1 \\ 0 & 0 & 1 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 2 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 & 3 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$
The pivot columns are $D = \{1,\, 3,\, 4,\, 5\}$, so we can create the set
$$T = \left\{ \begin{bmatrix} 2 \\ 1 \\ 0 \\ 1 \\ -2 \end{bmatrix},\ \begin{bmatrix} 1 \\ 1 \\ 1 \\ -1 \\ 1 \end{bmatrix},\ \begin{bmatrix} -1 \\ 0 \\ 4 \\ 2 \\ 3 \end{bmatrix},\ \begin{bmatrix} 1 \\ 2 \\ 1 \\ 1 \\ -1 \end{bmatrix} \right\}$$
and know that $\mathcal{C}(A) = \langle T \rangle$ and $T$ is a linearly independent set of columns from the set of columns of $A$.

We will now formalize the previous example, which will make it trivial to determine a linearly independent set of vectors that will span the column space of a matrix, and is constituted of just columns of $A$.

Theorem BCS (Basis of the Column Space). Suppose that $A$ is an $m \times n$ matrix with columns $A_1, A_2, A_3, \dots, A_n$, and $B$ is a row-equivalent matrix in reduced row-echelon form with $r$ nonzero rows. Let $D = \{d_1, d_2, d_3, \dots, d_r\}$ be the set of column indices where $B$ has leading 1's. Let $T = \{A_{d_1}, A_{d_2}, A_{d_3}, \dots, A_{d_r}\}$. Then

1. $T$ is a linearly independent set.
2. $\mathcal{C}(A) = \langle T \rangle$.

Proof. Definition CSM describes the column space as the span of the set of columns of $A$. Theorem BS tells us that we can reduce the set of vectors used in a span. If we apply Theorem BS to $\mathcal{C}(A)$, we would collect the columns of $A$ into a matrix (which would just be $A$ again) and bring the matrix to reduced row-echelon form, which is the matrix $B$ in the statement of the theorem. In this case, the conclusions of Theorem BS applied to $A$, $B$ and $\mathcal{C}(A)$ are exactly the conclusions we desire.

This is a nice result since it gives us a handful of vectors that describe the entire column space (through the span), and we believe this set is as small as possible because we cannot create any more relations of linear dependence to trim it down further. Furthermore, we defined the column space (Definition CSM) as all linear combinations of the columns of the matrix, and the elements of the set $T$ are still columns of the matrix (we won't be so lucky in the next two constructions of the column space).

Procedurally this theorem is extremely easy to apply. Row-reduce the original matrix, identify $r$ columns with leading 1's in this reduced matrix, and grab the corresponding columns of the original matrix. But it is still important to study the proof of Theorem BS and its motivation in Example COV, which lie at the root of this theorem. We'll trot through an example all the same.

Example CSOCD (Column space, original columns, Archetype D). Let's determine a compact expression for the entire column space of the coefficient matrix of the system of equations that is Archetype D. Notice that in Example CSMCS we were only determining if individual vectors were in the column space or not, now we are describing the entire column space.
To start with the application of Theorem BCS, call the coefficient matrix A A = \left [\array{ 2 &1& 7 &−7\cr −3 &4 &−5 &−6 \cr 1 &1& 4 &−5 } \right ] and row-reduce it to reduced row-echelon form, B = \left [\array{ \text{1}&0&3&−2\cr 0&\text{1 } &1 &−3 \cr 0&0&0& 0 } \right ]. There are leading 1’s in columns 1 and 2, so D = \{1,\kern 1.95872pt 2\}. To construct a set that spans C\kern -1.95872pt \left (A\right ), just grab the columns of A indicated by the set D, so C\kern -1.95872pt \left (A\right ) = \left \langle \left \{\left [\array{ 2\cr −3 \cr 1 } \right ],\kern 1.95872pt \left [\array{ 1\cr 4 \cr 1 } \right ]\right \}\right \rangle . That’s it. In Example CSMCS we determined that the vector c = \left [\array{ 2\cr 3 \cr 2 } \right ] was not in the column space of A. Try to write c as a linear combination of the first two columns of A. What happens? Also in Example CSMCS we determined that the vector b = \left [\array{ 8\cr −12 \cr 4 } \right ] was in the column space of A. Try to write b as a linear combination of the first two columns of A. What happens? Did you find a unique solution to this question? Hmmmm. Subsection CSNM: Column Space of a Nonsingular Matrix Let’s specialize to square matrices and contrast the column spaces of the coefficient matrices in Archetype A and Archetype B. Example CSAA Column space of Archetype A The coefficient matrix in Archetype A is A = \left [\array{ 1&−1&2\cr 2& 1 &1 \cr 1& 1 &0 } \right ] which row-reduces to \left [\array{ \text{1}&0& 1\cr 0&\text{1 } &−1 \cr 0&0& 0 } \right ]. Columns 1 and 2 have leading 1’s, so by Theorem BCS we can write C\kern -1.95872pt \left (A\right ) = \left \langle \left \{{A}_{1},\kern 1.95872pt {A}_{2}\right \}\right \rangle = \left \langle \left \{\left [\array{ 1\cr 2 \cr 1 } \right ],\kern 1.95872pt \left [\array{ −1\cr 1 \cr 1 } \right ]\right \}\right \rangle . We want to show in this example that C\kern -1.95872pt \left (A\right )\mathrel{≠}{ℂ}^{3}.
So take, for example, the vector b = \left [\array{ 1\cr 3 \cr 2 } \right ]. Then there is no solution to the system ℒS\kern -1.95872pt \left (A,\kern 1.95872pt b\right ), or equivalently, it is not possible to write b as a linear combination of {A}_{1} and {A}_{2}. Try one of these two computations yourself. (Or try both!) Since b∉C\kern -1.95872pt \left (A\right ), the column space of A cannot be all of {ℂ}^{3}. So by varying the vector of constants, it is possible to create inconsistent systems of equations with this coefficient matrix (the vector b being one such example). In Example MWIAA we wished to show that the coefficient matrix from Archetype A was not invertible as a first example of a matrix without an inverse. Our device there was to find an inconsistent linear system with A as the coefficient matrix. The vector of constants in that example was b, deliberately chosen outside the column space of A. Example CSAB Column space of Archetype B The coefficient matrix in Archetype B, call it B here, is known to be nonsingular (see Example NM). By Theorem NMUS, the linear system ℒS\kern -1.95872pt \left (B,\kern 1.95872pt b\right ) has a (unique) solution for every choice of b. Theorem CSCS then says that b ∈C\kern -1.95872pt \left (B\right ) for all b ∈ {ℂ}^{3}. Stated differently, there is no way to build an inconsistent system with the coefficient matrix B, but then we knew that already from Theorem NMUS. Example CSAA and Example CSAB together motivate the following equivalence, which says that nonsingular matrices have column spaces that are as big as possible. Theorem CSNM Column Space of a Nonsingular Matrix Suppose A is a square matrix of size n. Then A is nonsingular if and only if C\kern -1.95872pt \left (A\right ) = {ℂ}^{n}. Proof   (⇒) Suppose A is nonsingular. We wish to establish the set equality C\kern -1.95872pt \left (A\right ) = {ℂ}^{n}. By Definition CSM, C\kern -1.95872pt \left (A\right ) ⊆ {ℂ}^{n}.
To show that {ℂ}^{n} ⊆C\kern -1.95872pt \left (A\right ) choose b ∈ {ℂ}^{n}. By Theorem NMUS, we know the linear system ℒS\kern -1.95872pt \left (A,\kern 1.95872pt b\right ) has a (unique) solution and therefore is consistent. Theorem CSCS then says that b ∈C\kern -1.95872pt \left (A\right ). So by Definition SE, C\kern -1.95872pt \left (A\right ) = {ℂ}^{n}. (⇐) If {e}_{i} is column i of the n × n identity matrix (Definition SUV) and by hypothesis C\kern -1.95872pt \left (A\right ) = {ℂ}^{n}, then {e}_{i} ∈C\kern -1.95872pt \left (A\right ) for 1 ≤ i ≤ n. By Theorem CSCS, the system ℒS\kern -1.95872pt \left (A,\kern 1.95872pt {e}_{i}\right ) is consistent for 1 ≤ i ≤ n. Let {b}_{i} denote any one particular solution to ℒS\kern -1.95872pt \left (A,\kern 1.95872pt {e}_{i}\right ), 1 ≤ i ≤ n. Define the n × n matrix B = \left [{b}_{1}|{b}_{2}|{b}_{3}|\mathop{\mathop{…}}|{b}_{n}\right ]. Then \eqalignno{ AB & = A\left [{b}_{1}|{b}_{2}|{b}_{3}|\mathop{\mathop{…}}|{b}_{n}\right ] & & & & \cr & = [A{b}_{1}|A{b}_{2}|A{b}_{3}|\mathop{\mathop{…}}|A{b}_{n}] & &\text{@(a href="fcla-jsmath-2.99li31.html#definition.MM")Definition MM@(/a)} & & & & \cr & = \left [{e}_{1}|{e}_{2}|{e}_{3}|\mathop{\mathop{…}}|{e}_{n}\right ] & & & & \cr & = {I}_{n} & &\text{@(a href="fcla-jsmath-2.99li28.html#definition.SUV")Definition SUV@(/a)} & & & & \cr & & & & } So the matrix B is a “right-inverse” for A. By Theorem NMRRI, {I}_{n} is a nonsingular matrix, so by Theorem NPNT both A and B are nonsingular. Thus, in particular, A is nonsingular. (Travis Osborne contributed to this proof.) With this equivalence for nonsingular matrices we can update our list, Theorem NME3. Theorem NME4 Nonsingular Matrix Equivalences, Round 4 Suppose that A is a square matrix of size n. The following are equivalent. 1. A is nonsingular. 2. A row-reduces to the identity matrix. 3. The null space of A contains only the zero vector, N\kern -1.95872pt \left (A\right ) = \left \{0\right \}. 4.
The linear system ℒS\kern -1.95872pt \left (A,\kern 1.95872pt b\right ) has a unique solution for every possible choice of b. 5. The columns of A are a linearly independent set. 6. A is invertible. 7. The column space of A is {ℂ}^{n}, C\kern -1.95872pt \left (A\right ) = {ℂ}^{n}. Proof   Since Theorem CSNM is an equivalence, we can add it to the list in Theorem NME3. Subsection RSM: Row Space of a Matrix The rows of a matrix can be viewed as vectors, since they are just lists of numbers, arranged horizontally. So we will transpose a matrix, turning rows into columns, so we can then manipulate rows as column vectors. As a result we will be able to make some new connections between row operations and solutions to systems of equations. OK, here is the second primary definition of this section. Definition RSM Row Space of a Matrix Suppose A is an m × n matrix. Then the row space of A, ℛ\kern -1.95872pt \left (A\right ), is the column space of {A}^{t}, i.e. ℛ\kern -1.95872pt \left (A\right ) = C\kern -1.95872pt \left ({A}^{t}\right ). (This definition contains Notation RSM.) Informally, the row space is the set of all linear combinations of the rows of A. However, we write the rows as column vectors, thus the necessity of using the transpose to make the rows into columns. Additionally, with the row space defined in terms of the column space, all of the previous results of this section can be applied to row spaces. Notice that if A is a rectangular m × n matrix, then C\kern -1.95872pt \left (A\right ) ⊆ {ℂ}^{m}, while ℛ\kern -1.95872pt \left (A\right ) ⊆ {ℂ}^{n} and the two sets are not comparable since they do not even hold objects of the same type. However, when A is square of size n, both C\kern -1.95872pt \left (A\right ) and ℛ\kern -1.95872pt \left (A\right ) are subsets of {ℂ}^{n}, though usually the sets will not be equal (but see Exercise CRS.M20). 
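Before leaving column spaces of square matrices behind, the right-inverse built in the proof of Theorem CSNM can be made concrete: solving ℒS(A, e_i) for every column e_i of the identity amounts to row-reducing the augmented matrix [A | I_n]. Below is a minimal sketch under the assumption that A is nonsingular (so a pivot is always found); the function names and the 2 × 2 matrix are our own illustration, not from the text.

```python
from fractions import Fraction

def right_inverse(A):
    """Mirror the proof of Theorem CSNM: the solutions b_i of LS(A, e_i),
    assembled as columns, give a matrix B with A B = I_n.  Computed by
    row-reducing the augmented matrix [A | I_n] with exact arithmetic.
    Assumes A is nonsingular."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        pr = next(i for i in range(c, n) if M[i][c] != 0)  # pivot exists: A nonsingular
        M[c], M[pr] = M[pr], M[c]
        M[c] = [x / M[c][c] for x in M[c]]                 # scale pivot to 1
        for i in range(n):
            if i != c:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[c])]
    return [row[n:] for row in M]      # the right half is B (in fact A's inverse)

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

A = [[1, 2], [3, 5]]        # a small nonsingular matrix, chosen for illustration
B = right_inverse(A)        # B == [[-5, 2], [3, -1]]
I2 = matmul(A, B)           # recovers the 2 x 2 identity matrix
```

Consistent with the discussion after the proof, this B is not merely a right-inverse: Theorem NPNT forces B to be nonsingular as well, and matmul(B, A) returns the identity too.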
Example RSAI Row space of Archetype I The coefficient matrix in Archetype I is I = \left [\array{ 1 & 4 & 0 &−1& 0 & 7 &−9\cr 2 & 8 &−1 & 3 & 9 &−13 & 7 \cr 0 & 0 & 2 &−3&−4& 12 &−8\cr −1 &−4 & 2 & 4 & 8 &−31 & 37 } \right ]. To build the row space, we transpose the matrix, { I}^{t} = \left [\array{ 1 & 2 & 0 & −1\cr 4 & 8 & 0 & −4 \cr 0 & −1 & 2 & 2\cr −1 & 3 &−3 & 4 \cr 0 & 9 &−4& 8\cr 7 &−13 & 12 &−31 \cr −9& 7 &−8& 37 } \right ] Then the columns of this matrix are used in a span to build the row space, ℛ\kern -1.95872pt \left (I\right ) = C\kern -1.95872pt \left ({I}^{t}\right ) = \left \langle \left \{\left [\array{ 1\cr 4 \cr 0\cr −1 \cr 0\cr 7 \cr −9 } \right ],\kern 1.95872pt \left [\array{ 2\cr 8 \cr −1\cr 3 \cr 9\cr −13 \cr 7 } \right ],\kern 1.95872pt \left [\array{ 0\cr 0 \cr 2\cr −3 \cr −4\cr 12 \cr −8 } \right ],\kern 1.95872pt \left [\array{ −1\cr −4 \cr 2\cr 4 \cr 8\cr −31 \cr 37} \right ]\right \}\right \rangle . However, we can use Theorem BCS to get a slightly better description. First, row-reduce {I}^{t}, \left [\array{ \text{1}&0&0&−{31\over 7} \cr 0&\text{1}&0& {12\over 7} \cr 0&0&\text{1}& {13\over 7} \cr 0&0&0& 0\cr 0&0 &0 & 0 \cr 0&0&0& 0\cr 0&0 &0 & 0 } \right ]. Since there are leading 1’s in columns with indices D = \left \{1,\kern 1.95872pt 2,\kern 1.95872pt 3\right \}, the column space of {I}^{t} can be spanned by just the first three columns of {I}^{t}, ℛ\kern -1.95872pt \left (I\right ) = C\kern -1.95872pt \left ({I}^{t}\right ) = \left \langle \left \{\left [\array{ 1\cr 4 \cr 0\cr −1 \cr 0\cr 7 \cr −9 } \right ],\kern 1.95872pt \left [\array{ 2\cr 8 \cr −1\cr 3 \cr 9\cr −13 \cr 7 } \right ],\kern 1.95872pt \left [\array{ 0\cr 0 \cr 2\cr −3 \cr −4\cr 12 \cr −8 } \right ]\right \}\right \rangle . The row space would not be too interesting if it was simply the column space of the transpose. However, when we do row operations on a matrix we have no effect on the many linear combinations that can be formed with the rows of the matrix. 
This is stated more carefully in the following theorem. Theorem REMRS Row-Equivalent Matrices have equal Row Spaces Suppose A and B are row-equivalent matrices. Then ℛ\kern -1.95872pt \left (A\right ) = ℛ\kern -1.95872pt \left (B\right ). Proof   Two matrices are row-equivalent (Definition REM) if one can be obtained from another by a sequence of (possibly many) row operations. We will prove the theorem for two matrices that differ by a single row operation, and then this result can be applied repeatedly to get the full statement of the theorem. The row spaces of A and B are spans of the columns of their transposes. For each row operation we perform on a matrix, we can define an analogous operation on the columns. Perhaps we should call these column operations. Instead, we will still call them row operations, but we will apply them to the columns of the transposes. Refer to the columns of {A}^{t} and {B}^{t} as {A}_{i} and {B}_{i}, 1 ≤ i ≤ m. The row operation that switches rows will just switch columns of the transposed matrices. This will have no effect on the possible linear combinations formed by the columns. Suppose that {B}^{t} is formed from {A}^{t} by multiplying column {A}_{t} by α\mathrel{≠}0. In other words, {B}_{t} = α{A}_{t}, and {B}_{i} = {A}_{i} for all i\mathrel{≠}t. We need to establish that two sets are equal, C\kern -1.95872pt \left ({A}^{t}\right ) = C\kern -1.95872pt \left ({B}^{t}\right ). We will take a generic element of one and show that it is contained in the other. 
\eqalignno{ {β}_{1}{B}_{1}+ &{β}_{2}{B}_{2} + {β}_{3}{B}_{3} + \mathrel{⋯} + {β}_{t}{B}_{t} + \mathrel{⋯} + {β}_{m}{B}_{m} & & \cr & = {β}_{1}{A}_{1} + {β}_{2}{A}_{2} + {β}_{3}{A}_{3} + \mathrel{⋯} + {β}_{t}\left (α{A}_{t}\right ) + \mathrel{⋯} + {β}_{m}{A}_{m} & & \cr & = {β}_{1}{A}_{1} + {β}_{2}{A}_{2} + {β}_{3}{A}_{3} + \mathrel{⋯} + \left (α{β}_{t}\right ){A}_{t} + \mathrel{⋯} + {β}_{m}{A}_{m} & & } says that C\kern -1.95872pt \left ({B}^{t}\right ) ⊆C\kern -1.95872pt \left ({A}^{t}\right ). Similarly, \eqalignno{ {γ}_{1}{A}_{1}+ &{γ}_{2}{A}_{2} + {γ}_{3}{A}_{3} + \mathrel{⋯} + {γ}_{t}{A}_{t} + \mathrel{⋯} + {γ}_{m}{A}_{m} & & \cr & = {γ}_{1}{A}_{1} + {γ}_{2}{A}_{2} + {γ}_{3}{A}_{3} + \mathrel{⋯} + \left ({{γ}_{t}\over α} α\right ){A}_{t} + \mathrel{⋯} + {γ}_{m}{A}_{m} & & \cr & = {γ}_{1}{A}_{1} + {γ}_{2}{A}_{2} + {γ}_{3}{A}_{3} + \mathrel{⋯} + {{γ}_{t}\over α} \left (α{A}_{t}\right ) + \mathrel{⋯} + {γ}_{m}{A}_{m} & & \cr & = {γ}_{1}{B}_{1} + {γ}_{2}{B}_{2} + {γ}_{3}{B}_{3} + \mathrel{⋯} + {{γ}_{t}\over α} {B}_{t} + \mathrel{⋯} + {γ}_{m}{B}_{m} & & } says that C\kern -1.95872pt \left ({A}^{t}\right ) ⊆C\kern -1.95872pt \left ({B}^{t}\right ). So ℛ\kern -1.95872pt \left (A\right ) = C\kern -1.95872pt \left ({A}^{t}\right ) = C\kern -1.95872pt \left ({B}^{t}\right ) = ℛ\kern -1.95872pt \left (B\right ) when a single row operation of the second type is performed. Suppose now that {B}^{t} is formed from {A}^{t} by replacing {A}_{t} with α{A}_{s} + {A}_{t} for some α ∈ ℂ and s\mathrel{≠}t. In other words, {B}_{t} = α{A}_{s} + {A}_{t}, and {B}_{i} = {A}_{i} for i\mathrel{≠}t.
\eqalignno{ {β}_{1}{B}_{1}+&{β}_{2}{B}_{2} + {β}_{3}{B}_{3} + \mathrel{⋯} + {β}_{s}{B}_{s} + \mathrel{⋯} + {β}_{t}{B}_{t} + \mathrel{⋯} + {β}_{m}{B}_{m} && \cr & = {β}_{1}{A}_{1} + {β}_{2}{A}_{2} + {β}_{3}{A}_{3} + \mathrel{⋯} + {β}_{s}{A}_{s} + \mathrel{⋯} + {β}_{t}\left (α{A}_{s} + {A}_{t}\right ) + \mathrel{⋯} + {β}_{m}{A}_{m} && \cr & = {β}_{1}{A}_{1} + {β}_{2}{A}_{2} + {β}_{3}{A}_{3} + \mathrel{⋯} + {β}_{s}{A}_{s} + \mathrel{⋯} + \left ({β}_{t}α\right ){A}_{s} + {β}_{t}{A}_{t} + \mathrel{⋯} + {β}_{m}{A}_{m}&& \cr & = {β}_{1}{A}_{1} + {β}_{2}{A}_{2} + {β}_{3}{A}_{3} + \mathrel{⋯} + {β}_{s}{A}_{s} + \left ({β}_{t}α\right ){A}_{s} + \mathrel{⋯} + {β}_{t}{A}_{t} + \mathrel{⋯} + {β}_{m}{A}_{m}&& \cr & = {β}_{1}{A}_{1} + {β}_{2}{A}_{2} + {β}_{3}{A}_{3} + \mathrel{⋯} + \left ({β}_{s} + {β}_{t}α\right ){A}_{s} + \mathrel{⋯} + {β}_{t}{A}_{t} + \mathrel{⋯} + {β}_{m}{A}_{m} && } says that C\kern -1.95872pt \left ({B}^{t}\right ) ⊆C\kern -1.95872pt \left ({A}^{t}\right ). Similarly, \eqalignno{ {γ}_{1}&{A}_{1} + {γ}_{2}{A}_{2} + {γ}_{3}{A}_{3} + \mathrel{⋯} + {γ}_{s}{A}_{s} + \mathrel{⋯} + {γ}_{t}{A}_{t} + \mathrel{⋯} + {γ}_{m}{A}_{m} && \cr & = {γ}_{1}{A}_{1} + {γ}_{2}{A}_{2} + {γ}_{3}{A}_{3} + \mathrel{⋯} + {γ}_{s}{A}_{s} + \mathrel{⋯} + \left (−α{γ}_{t}{A}_{s} + α{γ}_{t}{A}_{s}\right ) + {γ}_{t}{A}_{t} + \mathrel{⋯} + {γ}_{m}{A}_{m} && \cr & = {γ}_{1}{A}_{1} + {γ}_{2}{A}_{2} + {γ}_{3}{A}_{3} + \mathrel{⋯} + \left (−α{γ}_{t}{A}_{s}\right ) + {γ}_{s}{A}_{s} + \mathrel{⋯} + \left (α{γ}_{t}{A}_{s} + {γ}_{t}{A}_{t}\right ) + \mathrel{⋯} + {γ}_{m}{A}_{m}&& \cr & = {γ}_{1}{A}_{1} + {γ}_{2}{A}_{2} + {γ}_{3}{A}_{3} + \mathrel{⋯} + \left (−α{γ}_{t} + {γ}_{s}\right ){A}_{s} + \mathrel{⋯} + {γ}_{t}\left (α{A}_{s} + {A}_{t}\right ) + \mathrel{⋯} + {γ}_{m}{A}_{m} && \cr & = {γ}_{1}{B}_{1} + {γ}_{2}{B}_{2} + {γ}_{3}{B}_{3} + \mathrel{⋯} + \left (−α{γ}_{t} + {γ}_{s}\right ){B}_{s} + \mathrel{⋯} + {γ}_{t}{B}_{t} + \mathrel{⋯} + {γ}_{m}{B}_{m} && } says that C\kern -1.95872pt \left 
({A}^{t}\right ) ⊆C\kern -1.95872pt \left ({B}^{t}\right ). So ℛ\kern -1.95872pt \left (A\right ) = C\kern -1.95872pt \left ({A}^{t}\right ) = C\kern -1.95872pt \left ({B}^{t}\right ) = ℛ\kern -1.95872pt \left (B\right ) when a single row operation of the third type is performed. So the row space of a matrix is preserved by each row operation, and hence row spaces of row-equivalent matrices are equal sets. Example RSREM Row spaces of two row-equivalent matrices In Example TREM we saw that the matrices \eqalignno{ A & = \left [\array{ 2&−1& 3 &4\cr 5& 2 &−2 &3 \cr 1& 1 & 0 &6 } \right ] &B & = \left [\array{ 1& 1 & 0 & 6\cr 3& 0 &−2 &−9 \cr 2&−1& 3 & 4 } \right ] & & & & } are row-equivalent by demonstrating a sequence of two row operations that converted A into B. Applying Theorem REMRS we can say ℛ\kern -1.95872pt \left (A\right ) = \left \langle \left \{\left [\array{ 2\cr −1 \cr 3\cr 4 } \right ],\kern 1.95872pt \left [\array{ 5\cr 2 \cr −2\cr 3 } \right ],\kern 1.95872pt \left [\array{ 1\cr 1 \cr 0\cr 6 } \right ]\right \}\right \rangle = \left \langle \left \{\left [\array{ 1\cr 1 \cr 0\cr 6 } \right ],\kern 1.95872pt \left [\array{ 3\cr 0 \cr −2\cr −9 } \right ],\kern 1.95872pt \left [\array{ 2\cr −1 \cr 3\cr 4 } \right ]\right \}\right \rangle = ℛ\kern -1.95872pt \left (B\right ) Theorem REMRS is at its best when one of the row-equivalent matrices is in reduced row-echelon form. The vectors that correspond to the zero rows can be ignored. (Who needs the zero vector when building a span? See Exercise LI.T10.) The echelon pattern insures that the nonzero rows yield vectors that are linearly independent. Here’s the theorem. Theorem BRS Basis for the Row Space Suppose that A is a matrix and B is a row-equivalent matrix in reduced row-echelon form. Let S be the set of nonzero columns of {B}^{t}. Then 1. ℛ\kern -1.95872pt \left (A\right ) = \left \langle S\right \rangle . 2. S is a linearly independent set. 
Proof   From Theorem REMRS we know that ℛ\kern -1.95872pt \left (A\right ) = ℛ\kern -1.95872pt \left (B\right ). If B has any zero rows, these correspond to columns of {B}^{t} that are the zero vector. We can safely toss out the zero vector in the span construction, since it can be recreated from the nonzero vectors by a linear combination where all the scalars are zero. So ℛ\kern -1.95872pt \left (A\right ) = \left \langle S\right \rangle . Suppose B has r nonzero rows and let D = \left \{{d}_{1},\kern 1.95872pt {d}_{2},\kern 1.95872pt {d}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {d}_{r}\right \} denote the column indices of B that have a leading one in them. Denote the r column vectors of {B}^{t}, the vectors in S, as {B}_{1},\kern 1.95872pt {B}_{2},\kern 1.95872pt {B}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {B}_{r}. To show that S is linearly independent, start with a relation of linear dependence {α}_{1}{B}_{1} + {α}_{2}{B}_{2} + {α}_{3}{B}_{3} + \mathrel{⋯} + {α}_{r}{B}_{r} = 0 Now consider this vector equality in location {d}_{i}. Since B is in reduced row-echelon form, the entries of column {d}_{i} of B are all zero, except for a (leading) 1 in row i. Thus, in {B}^{t}, row {d}_{i} is all zeros, excepting a 1 in column i. 
So, for 1 ≤ i ≤ r, \eqalignno{ 0 & ={ \left [0\right ]}_{{d}_{i}} & &\text{@(a href="fcla-jsmath-2.99li18.html#definition.ZCV")Definition ZCV@(/a)} & & & & \cr & ={ \left [{α}_{1}{B}_{1} + {α}_{2}{B}_{2} + {α}_{3}{B}_{3} + \mathrel{⋯} + {α}_{r}{B}_{r}\right ]}_{{d}_{i}} & &\text{@(a href="fcla-jsmath-2.99li26.html#definition.RLDCV")Definition RLDCV@(/a)} & & & & \cr & ={ \left [{α}_{1}{B}_{1}\right ]}_{{d}_{i}} +{ \left [{α}_{2}{B}_{2}\right ]}_{{d}_{i}} +{ \left [{α}_{3}{B}_{3}\right ]}_{{d}_{i}} + \mathrel{⋯} +{ \left [{α}_{r}{B}_{r}\right ]}_{{d}_{i}} & &\text{@(a href="fcla-jsmath-2.99li30.html#definition.MA")Definition MA@(/a)} & & & & \cr & = {α}_{1}{\left [{B}_{1}\right ]}_{{d}_{i}} + {α}_{2}{\left [{B}_{2}\right ]}_{{d}_{i}} + {α}_{3}{\left [{B}_{3}\right ]}_{{d}_{i}} + \mathrel{⋯} + {α}_{r}{\left [{B}_{r}\right ]}_{{d}_{i}} & &\text{@(a href="fcla-jsmath-2.99li30.html#definition.MSM")Definition MSM@(/a)} & & & & \cr & = {α}_{1}(0) + {α}_{2}(0) + {α}_{3}(0) + \mathrel{⋯} + {α}_{i}(1) + \mathrel{⋯} + {α}_{r}(0) & &\text{@(a href="fcla-jsmath-2.99li18.html#definition.RREF")Definition RREF@(/a)} & & & & \cr & = {α}_{i} & & & & } So we conclude that {α}_{i} = 0 for all 1 ≤ i ≤ r, establishing the linear independence of S (Definition LICV).
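Theorem BRS, together with Theorem REMRS behind it, is easy to check computationally: row-reduce, keep the nonzero rows, and two row-equivalent matrices must produce the identical set (the reduced row-echelon form is unique). A sketch in Python with exact rational arithmetic, using the row-equivalent pair from Example RSREM above; the helper name is ours, not the text's.

```python
from fractions import Fraction

def row_space_basis(rows):
    """Theorem BRS sketch: bring the matrix to reduced row-echelon form
    and keep the nonzero rows; they span R(A) and are linearly independent."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        pr = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        m[r] = [x / m[r][c] for x in m[r]]          # scale pivot to 1
        for i in range(len(m)):
            if i != r:                              # clear the rest of the column
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
        if r == len(m):
            break
    return m[:r]                                    # toss the zero rows

# The row-equivalent matrices A and B from Example RSREM
A = [[2, -1, 3, 4], [5, 2, -2, 3], [1, 1, 0, 6]]
B = [[1, 1, 0, 6], [3, 0, -2, -9], [2, -1, 3, 4]]

basis = row_space_basis(A)
# Theorem REMRS: row-equivalent matrices have equal row spaces, so the
# canonical bases must coincide exactly
assert basis == row_space_basis(B)
```

Here the common basis has three vectors, so in this instance the row space of A (and B) is all of ℂ^4's three-dimensional slice spanned by them; the zeros-and-ones pattern in the leading entries is the same advantage exploited in Example IAS.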
Example IAS Improving a span Suppose in the course of analyzing a matrix (its column space, its null space, its…) we encounter the following set of vectors, described by a span X = \left \langle \left \{\left [\array{ 1\cr 2 \cr 1\cr 6 \cr 6 } \right ],\kern 1.95872pt \left [\array{ 3\cr −1 \cr 2\cr −1 \cr 6 } \right ],\kern 1.95872pt \left [\array{ 1\cr −1 \cr 0\cr −1 \cr −2 } \right ],\kern 1.95872pt \left [\array{ −3\cr 2 \cr −3\cr 6 \cr −10 } \right ]\right \}\right \rangle Let A be the matrix whose rows are the vectors in X, so by design X = ℛ\kern -1.95872pt \left (A\right ), A = \left [\array{ 1 & 2 & 1 & 6 & 6\cr 3 &−1 & 2 &−1 & 6 \cr 1 &−1& 0 &−1& −2\cr −3 & 2 &−3 & 6 &−10 } \right ] Row-reduce A to form a row-equivalent matrix in reduced row-echelon form, B = \left [\array{ \text{1}&0&0& 2 &−1\cr 0&\text{1 } &0 & 3 & 1 \cr 0&0&\text{1}&−2& 5\cr 0&0 &0 & 0 & 0 } \right ] Then Theorem BRS says we can grab the nonzero columns of {B}^{t} and write X = ℛ\kern -1.95872pt \left (A\right ) = ℛ\kern -1.95872pt \left (B\right ) = \left \langle \left \{\left [\array{ 1\cr 0 \cr 0\cr 2 \cr −1 } \right ],\kern 1.95872pt \left [\array{ 0\cr 1 \cr 0\cr 3 \cr 1 } \right ],\kern 1.95872pt \left [\array{ 0\cr 0 \cr 1\cr −2 \cr 5 } \right ]\right \}\right \rangle These three vectors provide a much-improved description of X. There are fewer vectors, and the pattern of zeros and ones in the first three entries makes it easier to determine membership in X. And all we had to do was row-reduce the right matrix and toss out a zero row. Next to row operations themselves, this is probably the most powerful computational technique at your disposal as it quickly provides a much improved description of a span, any span. Theorem BRS and the techniques of Example IAS will provide yet another description of the column space of a matrix. First we state a triviality as a theorem, so we can reference it later. Theorem CSRST Column Space, Row Space, Transpose Suppose A is a matrix. 
Then C\kern -1.95872pt \left (A\right ) = ℛ\kern -1.95872pt \left ({A}^{t}\right ). Proof \eqalignno{ C\kern -1.95872pt \left (A\right ) & = C\kern -1.95872pt \left ({\left ({A}^{t}\right )}^{t}\right ) & &\text{@(a href="fcla-jsmath-2.99li30.html#theorem.TT")Theorem TT@(/a)} & & & & \cr & = ℛ\kern -1.95872pt \left ({A}^{t}\right ) & &\text{@(a href="#definition.RSM")Definition RSM@(/a)} & & & & } So to find another expression for the column space of a matrix, build its transpose, row-reduce it, toss out the zero rows, and convert the nonzero rows to column vectors to yield an improved set for the span construction. We’ll do Archetype I, then you do Archetype J. Example CSROI Column space from row operations, Archetype I To find the column space of the coefficient matrix of Archetype I, we proceed as follows. The matrix is I = \left [\array{ 1 & 4 & 0 &−1& 0 & 7 &−9\cr 2 & 8 &−1 & 3 & 9 &−13 & 7 \cr 0 & 0 & 2 &−3&−4& 12 &−8\cr −1 &−4 & 2 & 4 & 8 &−31 & 37 } \right ]. The transpose is \left [\array{ 1 & 2 & 0 & −1\cr 4 & 8 & 0 & −4 \cr 0 & −1 & 2 & 2\cr −1 & 3 &−3 & 4 \cr 0 & 9 &−4& 8\cr 7 &−13 & 12 &−31 \cr −9& 7 &−8& 37 } \right ]. Row-reduced this becomes, \left [\array{ \text{1}&0&0&−{31\over 7} \cr 0&\text{1}&0& {12\over 7} \cr 0&0&\text{1}& {13\over 7} \cr 0&0&0& 0\cr 0&0 &0 & 0 \cr 0&0&0& 0\cr 0&0 &0 & 0 } \right ]. Now, using Theorem CSRST and Theorem BRS C\kern -1.95872pt \left (I\right ) = ℛ\kern -1.95872pt \left ({I}^{t}\right ) = \left \langle \left \{\left [\array{ 1\cr 0 \cr 0 \cr −{31\over 7}} \right ],\kern 1.95872pt \left [\array{ 0\cr 1 \cr 0 \cr {12\over 7} } \right ],\kern 1.95872pt \left [\array{ 0\cr 0 \cr 1 \cr {13\over 7} } \right ]\right \}\right \rangle . This is a very nice description of the column space. Fewer vectors than the 7 involved in the definition, and the pattern of the zeros and ones in the first 3 slots can be used to advantage. 
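The pattern of zeros and ones just mentioned can be exploited directly: for any prospective vector b, the first three entries force the scalars in the linear combination, leaving a single equation in the fourth entry to verify. A small sketch (the helper name in_CI is our own, not the text's), using the three span vectors found above:

```python
from fractions import Fraction

def in_CI(b):
    """Membership test for the column space of Archetype I's coefficient
    matrix, using C(I) = span{(1,0,0,-31/7), (0,1,0,12/7), (0,0,1,13/7)}.
    The first three entries of b dictate the scalars, so only the fourth
    entry needs checking."""
    b = [Fraction(x) for x in b]
    return b[3] == b[0] * Fraction(-31, 7) + b[1] * Fraction(12, 7) + b[2] * Fraction(13, 7)

print(in_CI([3, 9, 1, 4]))   # True:  Archetype I's vector of constants
print(in_CI([3, 9, 1, 5]))   # False: this choice would make LS(I, b) inconsistent
```

Changing only the fourth entry flips the answer, which is exactly how one can rapidly manufacture consistent and inconsistent systems with this coefficient matrix.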
For example, Archetype I is presented as a consistent system of equations with a vector of constants b = \left [\array{ 3\cr 9 \cr 1\cr 4 } \right ]. Since ℒS\kern -1.95872pt \left (I,\kern 1.95872pt b\right ) is consistent, Theorem CSCS tells us that b ∈C\kern -1.95872pt \left (I\right ). But we could see this quickly with the following computation, which really only involves work in the 4th entry of the vectors, as the scalars in the linear combination are dictated by the first three entries of b. b = \left [\array{ 3\cr 9 \cr 1\cr 4 } \right ] = 3\left [\array{ 1\cr 0 \cr 0 \cr −{31\over 7}} \right ]+9\left [\array{ 0\cr 1 \cr 0 \cr {12\over 7} } \right ]+1\left [\array{ 0\cr 0 \cr 1 \cr {13\over 7} } \right ] Can you now rapidly construct several vectors, b, so that ℒS\kern -1.95872pt \left (I,\kern 1.95872pt b\right ) is consistent, and several more so that the system is inconsistent? Subsection READ: Reading Questions 1. Write the column space of the matrix below as the span of a set of three vectors and explain your choice of method. \left [\array{ 1 &3&1&3\cr 2 &0 &1 &1 \cr −1&2&1&0} \right ] 2. Suppose that A is an n × n nonsingular matrix. What can you say about its column space? 3. Is the vector \left [\array{ 0\cr 5 \cr 2\cr 3 } \right ] in the row space of the following matrix? Why or why not? \left [\array{ 1 &3&1&3\cr 2 &0 &1 &1 \cr −1&2&1&0} \right ] Subsection EXC: Exercises C20 For parts (a), (b) and (c), find a set of linearly independent vectors X so that C\kern -1.95872pt \left (A\right ) = \left \langle X\right \rangle , and a set of linearly independent vectors Y so that ℛ\kern -1.95872pt \left (A\right ) = \left \langle Y \right \rangle . 1. A = \left [\array{ 1& 2 &3& 1\cr 0& 1 &1 & 2 \cr 1&−1&2& 3\cr 1& 1 &2 &−1 } \right ] 2. A = \left [\array{ 1&2& 1 &1&1\cr 3&2 &−1 &4 &5 \cr 0&1& 1 &1&2} \right ] 3. A = \left [\array{ 2&1& 0\cr 3&0 & 3 \cr 1&2&−3\cr 1&1 &−1 \cr 1&1&−1} \right ] 4.
From your results in parts (a)–(c), can you formulate a conjecture about the sets X and Y? Contributed by Chris Black C30 Example CSOCD expresses the column space of the coefficient matrix from Archetype D (call the matrix A here) as the span of the first two columns of A. In Example CSMCS we determined that the vector c = \left [\array{ 2\cr 3 \cr 2 } \right ] was not in the column space of A and that the vector b = \left [\array{ 8\cr −12 \cr 4 } \right ] was in the column space of A. Attempt to write c and b as linear combinations of the two vectors in the span construction for the column space in Example CSOCD and record your observations. Contributed by Robert Beezer Solution [773] C31 For the matrix A below find a set of vectors T meeting the following requirements: (1) the span of T is the column space of A, that is, \left \langle T\right \rangle = C\kern -1.95872pt \left (A\right ), (2) T is linearly independent, and (3) the elements of T are columns of A. A = \left [\array{ 2 & 1 & 4 &−1&2\cr 1 &−1 & 5 & 1 &1 \cr −1& 2 &−7& 0 &1\cr 2 &−1 & 8 &−1 &2 } \right ] Contributed by Robert Beezer Solution [773] C32 In Example CSAA, verify that the vector b is not in the column space of the coefficient matrix. Contributed by Robert Beezer C33 Find a linearly independent set S so that the span of S, \left \langle S\right \rangle , is the row space of the matrix B. B = \left [\array{ 2 &3&1& 1\cr 1 &1 &0 & 1 \cr −1&2&3&−4 } \right ] Contributed by Robert Beezer Solution [774] C34 For the 3 × 4 matrix A and the column vector y ∈ {ℂ}^{4} given below, determine if y is in the row space of A. In other words, answer the question: y ∈ℛ\kern -1.95872pt \left (A\right )?
\eqalignno{ A & = \left [\array{ −2& 6 &7&−1\cr 7 &−3 &0 &−3 \cr 8 & 0 &7& 6 } \right ] &y & = \left [\array{ 2\cr 1 \cr 3\cr −2 } \right ] & & & & } Contributed by Robert Beezer Solution [775] C35 For the matrix A below, find two different linearly independent sets whose spans equal the column space of A, C\kern -1.95872pt \left (A\right ), such that (a) the elements are each columns of A. (b) the set is obtained by a procedure that is substantially different from the procedure you use in part (a). \eqalignno{ A & = \left [\array{ 3 & 5 &1&−2\cr 1 & 2 &3 & 3 \cr −3&−4&7&13 } \right ] & & } Contributed by Robert Beezer Solution [776] C40 The following archetypes are systems of equations. For each system, write the vector of constants as a linear combination of the vectors in the span construction for the column space provided by Theorem BCS (these vectors are listed for each of these archetypes). Archetype A Archetype B Archetype C Archetype D Archetype E Archetype F Archetype G Archetype H Archetype I Archetype J Contributed by Robert Beezer C42 The following archetypes are either matrices or systems of equations with coefficient matrices. For each matrix, compute a set of column vectors such that (1) the vectors are columns of the matrix, (2) the set is linearly independent, and (3) the span of the set is the column space of the matrix. See Theorem BCS. Archetype A Archetype B Archetype C Archetype D/Archetype E Archetype F Archetype G/Archetype H Archetype I Archetype J Archetype K Archetype L Contributed by Robert Beezer C50 The following archetypes are either matrices or systems of equations with coefficient matrices. For each matrix, compute a set of column vectors such that (1) the set is linearly independent, and (2) the span of the set is the row space of the matrix. See Theorem BRS. 
Archetype A Archetype B Archetype C Archetype D/Archetype E Archetype F Archetype G/Archetype H Archetype I Archetype J Archetype K Archetype L Contributed by Robert Beezer C51 The following archetypes are either matrices or systems of equations with coefficient matrices. For each matrix, compute the column space as the span of a linearly independent set as follows: transpose the matrix, row-reduce, toss out zero rows, convert rows into column vectors. See Example CSROI. Archetype A Archetype B Archetype C Archetype D/Archetype E Archetype F Archetype G/Archetype H Archetype I Archetype J Archetype K Archetype L Contributed by Robert Beezer C52 The following archetypes are systems of equations. For each different coefficient matrix build two new vectors of constants. The first should lead to a consistent system and the second should lead to an inconsistent system. Descriptions of the column space as spans of linearly independent sets of vectors with “nice patterns” of zeros and ones might be most useful and instructive in connection with this exercise. (See the end of Example CSROI.) Archetype A Archetype B Archetype C Archetype D/Archetype E Archetype F Archetype G/Archetype H Archetype I Archetype J Contributed by Robert Beezer M10 For the matrix E below, find vectors b and c so that the system ℒS\kern -1.95872pt \left (E,\kern 1.95872pt b\right ) is consistent and ℒS\kern -1.95872pt \left (E,\kern 1.95872pt c\right ) is inconsistent. E = \left [\array{ −2& 1 &1&0\cr 3 &−1 &0 &2 \cr 4 & 1 &1&6 } \right ] Contributed by Robert Beezer Solution [778] M20 Usually the column space and null space of a matrix contain vectors of different sizes. For a square matrix, though, the vectors in these two sets are the same size. Usually the two sets will be different. Construct an example of a square matrix where the column space and null space are equal. 
Contributed by Robert Beezer Solution [779] M21 We have a variety of theorems about how to create column spaces and row spaces and they frequently involve row-reducing a matrix. Here is a procedure that some try to use to get a column space. Begin with an m × n matrix A and row-reduce to a matrix B with columns {B}_{1},\kern 1.95872pt {B}_{2},\kern 1.95872pt {B}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {B}_{n}. Then form the column space of A as C\kern -1.95872pt \left (A\right ) = \left \langle \left \{{B}_{1},\kern 1.95872pt {B}_{2},\kern 1.95872pt {B}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {B}_{n}\right \}\right \rangle = C\kern -1.95872pt \left (B\right ) This is not a legitimate procedure, and therefore is not a theorem. Construct an example to show that the procedure will not in general create the column space of A. Contributed by Robert Beezer Solution [779] T40 Suppose that A is an m × n matrix and B is an n × p matrix. Prove that the column space of AB is a subset of the column space of A, that is C\kern -1.95872pt \left (AB\right ) ⊆C\kern -1.95872pt \left (A\right ). Provide an example where the opposite is false; in other words, give an example where C\kern -1.95872pt \left (A\right )⊈C\kern -1.95872pt \left (AB\right ). (Compare with Exercise MM.T40.) Contributed by Robert Beezer Solution [779] T41 Suppose that A is an m × n matrix and B is an n × n nonsingular matrix. Prove that the column space of A is equal to the column space of AB, that is C\kern -1.95872pt \left (A\right ) = C\kern -1.95872pt \left (AB\right ). (Compare with Exercise MM.T41 and Exercise CRS.T40.) Contributed by Robert Beezer Solution [780] T45 Suppose that A is an m × n matrix and B is an n × m matrix where AB is a nonsingular matrix.
Prove that
(1) $\mathcal{N}(B) = \{0\}$
(2) $\mathcal{C}(B) \cap \mathcal{N}(A) = \{0\}$
Discuss the case when m = n in connection with Theorem NPNT.
Contributed by Robert Beezer Solution [781]

Subsection SOL: Solutions

C30 Contributed by Robert Beezer Statement [763]
In each case, begin with a vector equation where one side contains a linear combination of the two vectors from the span construction that gives the column space of A, with unknowns for scalars, and then use Theorem SLSLC to set up a system of equations. For c, the corresponding system has no solution, as we would expect. For b there is a solution, as we would expect. What is interesting is that the solution is unique. This is a consequence of the linear independence of the set of two vectors in the span construction. If we wrote b as a linear combination of all four columns of A, then there would be infinitely many ways to do this.

C31 Contributed by Robert Beezer Statement [764]
Theorem BCS is the right tool for this problem. Row-reduce this matrix, identify the pivot columns, and then grab the corresponding columns of A for the set T. The matrix A row-reduces to
$$\begin{bmatrix} 1 & 0 & 3 & 0 & 0 \\ 0 & 1 & -2 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$
So $D = \{1,\,2,\,4,\,5\}$ and then
$$T = \{A_1,\,A_2,\,A_4,\,A_5\} = \left\{ \begin{bmatrix} 2 \\ 1 \\ -1 \\ 2 \end{bmatrix},\; \begin{bmatrix} 1 \\ -1 \\ 2 \\ -1 \end{bmatrix},\; \begin{bmatrix} -1 \\ 1 \\ 0 \\ -1 \end{bmatrix},\; \begin{bmatrix} 2 \\ 1 \\ 1 \\ 2 \end{bmatrix} \right\}$$
has the requested properties.

C33 Contributed by Robert Beezer Statement [765]
Theorem BRS is the most direct route to a set with these properties. Row-reduce, toss zero rows, keep the others.
You could also transpose the matrix, then look for the column space by row-reducing the transpose and applying Theorem BCS. We'll do the former:
$$B \xrightarrow{\text{RREF}} \begin{bmatrix} 1 & 0 & -1 & 2 \\ 0 & 1 & 1 & -1 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
So the set S is
$$S = \left\{ \begin{bmatrix} 1 \\ 0 \\ -1 \\ 2 \end{bmatrix},\; \begin{bmatrix} 0 \\ 1 \\ 1 \\ -1 \end{bmatrix} \right\}$$

C34 Contributed by Robert Beezer Statement [766]
$$\begin{aligned} y \in \mathcal{R}(A) &\iff y \in \mathcal{C}(A^t) && \text{(Definition RSM)} \\ &\iff \mathcal{LS}(A^t,\,y)\ \text{is consistent} && \text{(Theorem CSCS)} \end{aligned}$$
The augmented matrix $[\,A^t \mid y\,]$ row-reduces to
$$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
and with a leading 1 in the final column, Theorem RCLS tells us the linear system is inconsistent, and so $y \notin \mathcal{R}(A)$.

C35 Contributed by Robert Beezer Statement [766]
(a) By Theorem BCS we can row-reduce A, identify the pivot columns with the set D, and “keep” those columns of A, and we will have a set with the desired properties.
$$A \xrightarrow{\text{RREF}} \begin{bmatrix} 1 & 0 & -13 & -19 \\ 0 & 1 & 8 & 11 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
So we have the set of pivot columns $D = \{1,\,2\}$ and we “keep” the first two columns of A,
$$\left\{ \begin{bmatrix} 3 \\ 1 \\ -3 \end{bmatrix},\; \begin{bmatrix} 5 \\ 2 \\ -4 \end{bmatrix} \right\}$$
(b) We can view the column space as the row space of the transpose (Theorem CSRST).
We can get a basis of the row space of a matrix quickly by bringing the matrix to reduced row-echelon form and keeping the nonzero rows as column vectors (Theorem BRS). Here goes:
$$A^t \xrightarrow{\text{RREF}} \begin{bmatrix} 1 & 0 & -2 \\ 0 & 1 & 3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$
Taking the nonzero rows and tilting them up as columns gives us
$$\left\{ \begin{bmatrix} 1 \\ 0 \\ -2 \end{bmatrix},\; \begin{bmatrix} 0 \\ 1 \\ 3 \end{bmatrix} \right\}$$
An alternative approach based on the matrix L from extended echelon form (Definition EEF) and Theorem FS will work as well.

M10 Contributed by Robert Beezer Statement [769]
Any vector from $\mathbb{C}^3$ will lead to a consistent system, and therefore there is no vector that will lead to an inconsistent system. How do we convince ourselves of this? First, row-reduce E:
$$E \xrightarrow{\text{RREF}} \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \end{bmatrix}$$
If we augment E with any vector of constants and row-reduce the augmented matrix, we will never find a leading 1 in the final column, so by Theorem RCLS the system will always be consistent. Said another way, the column space of E is all of $\mathbb{C}^3$: $\mathcal{C}(E) = \mathbb{C}^3$. So by Theorem CSCS any vector of constants will create a consistent system (and none will create an inconsistent system).

M20 Contributed by Robert Beezer Statement [770]
The 2 × 2 matrix
$$A = \begin{bmatrix} 1 & 1 \\ -1 & -1 \end{bmatrix}$$
has $\mathcal{C}(A) = \mathcal{N}(A) = \left\langle\left\{ \begin{bmatrix} 1 \\ -1 \end{bmatrix} \right\}\right\rangle$.

M21 Contributed by Robert Beezer Statement [770]
Begin with a matrix A (of any size) that does not have any zero rows, but which when row-reduced to B yields at least one row of zeros. Such a matrix should be easy to construct (or find, say from Archetype A).
$\mathcal{C}(A)$ will contain some vectors whose final slot (entry m) is nonzero; however, every column vector from the matrix B will have a zero in slot m, and so every vector in $\mathcal{C}(B)$ will also contain a zero in the final slot. This means that $\mathcal{C}(A) \neq \mathcal{C}(B)$, since we have vectors in $\mathcal{C}(A)$ that cannot be elements of $\mathcal{C}(B)$.

T40 Contributed by Robert Beezer Statement [771]
Choose $x \in \mathcal{C}(AB)$. Then by Theorem CSCS there is a vector w that is a solution to $\mathcal{LS}(AB,\,x)$. Define the vector y by $y = Bw$. We're set:
$$\begin{aligned} Ay &= A(Bw) && \text{(definition of } y\text{)} \\ &= (AB)w && \text{(Theorem MMA)} \\ &= x && \text{(} w \text{ solution to } \mathcal{LS}(AB,\,x)\text{)} \end{aligned}$$
This says that $\mathcal{LS}(A,\,x)$ is a consistent system, and by Theorem CSCS we see that $x \in \mathcal{C}(A)$ and therefore $\mathcal{C}(AB) \subseteq \mathcal{C}(A)$.
For an example where $\mathcal{C}(A) \not\subseteq \mathcal{C}(AB)$, choose A to be any nonzero matrix and choose B to be a zero matrix. Then $\mathcal{C}(A) \neq \{0\}$ and $\mathcal{C}(AB) = \mathcal{C}(O) = \{0\}$.

T41 Contributed by Robert Beezer Statement [771]
From the solution to Exercise CRS.T40 we know that $\mathcal{C}(AB) \subseteq \mathcal{C}(A)$. So to establish the set equality (Definition SE) we need to show that $\mathcal{C}(A) \subseteq \mathcal{C}(AB)$. Choose $x \in \mathcal{C}(A)$.
By Theorem CSCS the linear system $\mathcal{LS}(A,\,x)$ is consistent, so let y be one such solution. Because B is nonsingular, any linear system using B as a coefficient matrix will have a unique solution (Theorem NMUS). Let w be the unique solution to the linear system $\mathcal{LS}(B,\,y)$. All set, here we go:
$$\begin{aligned} (AB)w &= A(Bw) && \text{(Theorem MMA)} \\ &= Ay && \text{(} w \text{ solution to } \mathcal{LS}(B,\,y)\text{)} \\ &= x && \text{(} y \text{ solution to } \mathcal{LS}(A,\,x)\text{)} \end{aligned}$$
This says that the linear system $\mathcal{LS}(AB,\,x)$ is consistent, so by Theorem CSCS, $x \in \mathcal{C}(AB)$. So $\mathcal{C}(A) \subseteq \mathcal{C}(AB)$.

T45 Contributed by Robert Beezer Statement [771]
First, $0 \in \mathcal{N}(B)$ trivially. Now suppose that $x \in \mathcal{N}(B)$. Then
$$\begin{aligned} ABx &= A(Bx) && \text{(Theorem MMA)} \\ &= A0 && \text{(} x \in \mathcal{N}(B)\text{)} \\ &= 0 && \text{(Theorem MMZM)} \end{aligned}$$
Since we have assumed AB is nonsingular, Definition NM implies that $x = 0$.
Second, $0 \in \mathcal{C}(B)$ and $0 \in \mathcal{N}(A)$ trivially, and so the zero vector is in the intersection as well (Definition SI). Now suppose that $y \in \mathcal{C}(B) \cap \mathcal{N}(A)$. Because $y \in \mathcal{C}(B)$, Theorem CSCS says the system $\mathcal{LS}(B,\,y)$ is consistent. Let $x \in \mathbb{C}^n$ be one solution to this system.
Then
$$\begin{aligned} ABx &= A(Bx) && \text{(Theorem MMA)} \\ &= Ay && \text{(} x \text{ solution to } \mathcal{LS}(B,\,y)\text{)} \\ &= 0 && \text{(} y \in \mathcal{N}(A)\text{)} \end{aligned}$$
Since we have assumed AB is nonsingular, Definition NM implies that $x = 0$. Then $y = Bx = B0 = 0$.
When AB is nonsingular and m = n, we know that the first condition, $\mathcal{N}(B) = \{0\}$, means that B is nonsingular (Theorem NMTNS). Because B is nonsingular, Theorem CSNM implies that $\mathcal{C}(B) = \mathbb{C}^m$. Then the second condition, $\mathcal{C}(B) \cap \mathcal{N}(A) = \{0\}$, forces $\mathcal{N}(A) = \{0\}$, and a second application of Theorem NMTNS shows that A must be nonsingular. This reproduces Theorem NPNT.
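Readers with access to Python can sanity-check some of these solutions numerically. The sketch below is our own (the variable names are not from the text, and NumPy is assumed): it verifies the M20 example, where the column space and null space coincide, and the rank claim behind the M10 solution.

```python
import numpy as np

# M20: a 2x2 matrix whose column space equals its null space.
A = np.array([[1.0, 1.0],
              [-1.0, -1.0]])
v = np.array([1.0, -1.0])   # candidate spanning vector for both subspaces

# v lies in the null space: A v = 0
print(np.allclose(A @ v, 0))   # True

# each column of A is parallel to v (2x2 "cross product" test), so C(A) = <{v}> too
print(all(abs(A[0, j] * v[1] - A[1, j] * v[0]) < 1e-12 for j in range(2)))  # True

# M10 solution: E has rank 3, so C(E) is all of C^3 and every
# vector of constants b yields a consistent system LS(E, b).
E = np.array([[-2.0, 1.0, 1.0, 0.0],
              [3.0, -1.0, 0.0, 2.0],
              [4.0, 1.0, 1.0, 6.0]])
print(np.linalg.matrix_rank(E))   # 3
```

The rank computation confirms that no choice of c can make LS(E, c) inconsistent, exactly as argued in the M10 solution.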
https://www.mathway.com/examples/finite-math/matrices/finding-the-adjoint?id=600
# Finite Math Examples

Transpose the cofactor matrix to find the adjoint: each element of the cofactor matrix moves to the position with its row and column indices swapped in the transposed matrix. Equivalently, turn all rows of the cofactor matrix into columns of the transposed matrix. The result is the adjoint of the original matrix.
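The procedure described above (build the cofactor matrix, then transpose it) can be sketched in code. This is our own illustration with a made-up 3×3 matrix, not Mathway's worked example; it assumes NumPy. The identity A·adj(A) = det(A)·I serves as a correctness check.

```python
import numpy as np

def adjugate(A):
    """Adjoint (adjugate) of a square matrix: transpose of the cofactor matrix."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # minor: delete row i and column j, then take the determinant
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T  # transposing the cofactor matrix gives the adjugate

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 1.0]])
adj = adjugate(A)
# sanity check: A @ adj(A) = det(A) * I
print(np.allclose(A @ adj, np.linalg.det(A) * np.eye(3)))   # True
```

For an invertible matrix this also gives the inverse as adj(A) divided by det(A), which is why the adjoint shows up in the inverse-by-cofactors method.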
https://discourse.pymc.io/t/compounds-steps/954
# Compound steps

I am having trouble understanding how the iterations are made with different compound steps. I'm running the model explained in:

And it's using the NUTS sampler for the continuous variable and the CategoricalGibbsMetropolis sampler for the two discrete variables. The only explanation I could find in the documentation is: "sampling proceeds by first applying step1 then step2 at each iteration." I don't really understand how the two samplers could be used at each iteration.

There is also an open issue on GitHub that raises the point that compound steps in sampling are not explained in the documentation:

From what I can see when I plot step_size_bar, I understand it as running all the iterations using the first sampler and then running all the iterations again using the second sampler:

```python
n = aCH_.eval().shape[1]
with pm.Model() as basic_model:
    # Priors for unknown model parameters
    b1 = pm.Uniform('b1', lower=0.3, upper=0.5, testval=0.45)
    ncomp_aCH = pm.Categorical('ncomp_aCH', p=np.ones(n)/n)
    ncomp_aCOH = pm.Categorical('ncomp_aCOH', p=np.ones(n)/n)
    aCH = aCH_[0, ncomp_aCH]
    aCOH = aCOH_[0, ncomp_aCOH]
    out = b1*aCH + aCOH
    # Likelihood (sampling distribution) of observations
    Y_obs = pm.Normal('Y_obs', mu=out, tau=sigma, observed=Y)
    trace = pm.sample(2000000, progressbar=True)

plt.plot(trace['step_size_bar'])
plt.show()
```

Does anyone have more information on how compound-step sampling works?

Yeah, that part is not explained very well in the doc; that's why I opened the issue there… In brief, when compound steps are involved, it takes a list of steps to generate a list of methods. So, for example, if you do

```python
with pm.Model() as m:
    rv1 = ...
    ...
    step1 = pm.Metropolis([rv1, rv2])
    step2 = pm.CategoricalGibbsMetropolis([rv3])
    trace = pm.sample(..., step=[step1, step2], ...)
```

the compound step now contains a list of methods. And at each sample, it iterates over each method, which takes a point as input and generates a new point as output.
The new point is proposed within each step via a stochastic kernel, and if the proposal is rejected by the MH criteria it just outputs the original input point. Take a simple example:

```python
n_ = theano.shared(np.asarray([10, 15]))
with pm.Model() as m:
    p = pm.Beta('p', 1., 1.)
    ni = pm.Bernoulli('ni', .5)
    k = pm.Binomial('k', p=p, n=n_[ni], observed=4)
```

Now specify the steps:

```python
with m:
    step1 = pm.Metropolis([m.free_RVs[0]])
    step2 = pm.BinaryGibbsMetropolis([ni])
```

And now you can pass a point to a step and see what happens:

```python
point = m.test_point
point
# {'ni': array(0), 'p_logodds__': array(0.)}

point, state = step1.step(point=point)
point, state
# ({'ni': array(0), 'p_logodds__': array(0.69089502)},
#  [{'accept': 0.8832003265520174, 'tune': True}])
```

As you can see, the value of ni does not change, but p_logodds__ is updated. Similarly, you can get a sample using step2:

```python
# (notice that there is no generates_stats, so only the point is output here)
point = step2.step(point=point)
point
# {'ni': array(0), 'p_logodds__': array(0.69089502)}
```

A compound step works exactly like this, by iterating over all the steps within the list. In effect, it is a Metropolis-Hastings within Gibbs sampling.

I hope this clarifies it a bit; it took me some time to understand the details as well. Also, it would be a welcome contribution if you extend this into a doc.

I can surely try. However, this part is not clear to me, but that's probably because I have focused on understanding the standard Metropolis-Hastings algorithm with a random-walk proposal until now and haven't taken the time to fully understand Gibbs sampling. I'm going to take some time to read references on Gibbs and on NUTS and will come back to you either if I still don't understand or if I am confident enough in my understanding to write a doc.

Ok, I think I got it.
So it's basically Gibbs sampling, where a sample from the conditional distribution is generated using, in your example, one iteration of Metropolis for the parameter p and one iteration of BinaryGibbsMetropolis for ni? But of course the Metropolis/BinaryGibbsMetropolis step can reject the proposed new value, in which case no change for that parameter occurs.

When the steps are automatically attributed to parameters, how do we know in which order the two steps take place? Is it a random order at each iteration or a fixed one?

Exactly. Notice that it is not exactly Gibbs sampling, as it does not generate from a conditional probability. More precisely, it updates in a Gibbs-like fashion where the accept-reject is based on comparing the ratio of the conditional logp with $p \sim \text{Uniform}(0, 1)$.

The order follows the same order as the RVs when it is assigned automatically. But if you specify the steps you can change that order as well:

```python
with m:
    comp_step1 = pm.CompoundStep([step1, step2])
comp_step1.methods
# [<pymc3.step_methods.metropolis.Metropolis at 0x7fcfbeae7be0>,
```
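To make the iteration order concrete, here is a minimal, dependency-free sketch of the compound-step idea discussed in this thread. The class names and toy updates are ours (they only mimic the behaviour described above, not PyMC's actual internals): each step method updates only the variables it owns, and the compound step applies the methods one after another, in list order, within a single iteration.

```python
class ToyContinuousStep:
    """Updates only the continuous variable 'p' (stands in for Metropolis/NUTS)."""
    def step(self, point):
        new = dict(point)
        new['p'] = point['p'] + 0.1   # deterministic toy "proposal", always accepted
        return new

class ToyDiscreteStep:
    """Updates only the discrete variable 'ni' (stands in for BinaryGibbsMetropolis)."""
    def step(self, point):
        new = dict(point)
        new['ni'] = 1 - point['ni']   # toy flip of the binary variable
        return new

class ToyCompoundStep:
    """Applies each step method in order, once per iteration."""
    def __init__(self, methods):
        self.methods = methods
    def step(self, point):
        for method in self.methods:
            point = method.step(point)
        return point

compound = ToyCompoundStep([ToyContinuousStep(), ToyDiscreteStep()])
point = {'p': 0.0, 'ni': 0}
for _ in range(3):        # three iterations: BOTH variables move in every iteration
    point = compound.step(point)
# p has been incremented three times and ni flipped three times
```

This illustrates the answer to the ordering question: the methods run in the order they appear in the list, once each per iteration, rather than one sampler finishing all its iterations before the other starts.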
https://www.nature.com/articles/s41598-017-18410-x?error=cookies_not_supported&code=bbe33624-ae65-4d11-84de-b0b4fb6f0870
# Dial-in Topological Metamaterials Based on Bistable Stewart Platform

## Abstract

Recently, there have been significant efforts to guide mechanical energy in structures by relying on a novel topological framework popularized by the discovery of topological insulators. Here, we propose a topological metamaterial system based on the design of the Stewart Platform, which can not only guide mechanical waves robustly in a desired path, but also can be tuned in situ to change this wave path at will. Without resorting to any active materials, the current system harnesses bistability in its unit cells, such that tuning can be performed simply by a dial-in action. Consequently, a topological transition mechanism inspired by the quantum valley Hall effect can be achieved. We show the possibility of tuning in a variety of topological and traditional waveguides in the same system, and numerically investigate key qualitative and quantitative differences between them. We observe that even though both types of waveguides can lead to significant wave transmission for a certain frequency range, topological waveguides are distinctive as they support robust, back-scattering-immune, one-way wave propagation.

## Introduction

The discovery of topological insulators1,2, which are labelled as a new state of matter in condensed matter physics, has galvanized research efforts in multiple fields. The topological framework enables us to understand fascinating phenomena such as the quantum Hall effect (QHE)3,4, the quantum spin Hall effect (QSHE)5, and the quantum valley Hall effect (QVHE)6. These are special because of the intriguing directional and robust edge states they support. It is only recently that these fundamental concepts have been extended to the fields of photonics7, acoustics8,9,10,11,12,13,14, and mechanics15,16,17,18,19,20,21,22,23,24,25,26.
Such an extension is of fundamental interest to the research community, as it has the potential to set new design principles for topological metamaterials that aim to strategically tailor energy transport for waveguiding, isolating, switching, filtering, and related applications. Acoustic and mechanical metamaterials that rely on QHE need active components (e.g., gyroscopes and flow circulators) or applications of external fields (e.g., magnetic fields) to break time-reversal symmetry8,9,10,16,17,18,19. This adds complexity to the system, and thus these metamaterials based on QHE are challenging to realize in practical environments. QSHE-inspired metamaterials employ only passive components, but usually mandate intricate ways to achieve a double Dirac cone in their dispersion relation11,12,20,21,22,23. Metamaterials based on QVHE13,14,24,25,26, however, rely on the breakage of inversion symmetry to achieve topological properties, and these can be comparably easier to design and realize in practical settings. QVHE originates from the newly discovered valley degree of freedom (DOF) of electrons in the two-dimensional honeycomb lattice of graphene6. These valley DOFs are energetically degenerate but are largely separated in momentum space13,14. Due to this large separation, inter-valley scattering can be avoided, and the valley DOFs constitute two pseudo-spins, which have opposite-directional and robust properties on the topological interfaces. On this principle, back-scattering-immune and robust energy transport along topological waveguides with sharp bends has been proposed in acoustic13,14 and mechanical25,26 systems. Though the aforementioned configurations have been shown to guide energy in a specific path in the system, there remains a challenge: can one change the path in situ and thus achieve complete control over the waveguide?
Once achieved, this will provide a fertile testbed for future experiments related to the waveguiding capability of various types of topological interfaces, and at the same time these can be compared with traditional waveguides in the same system more closely than ever. However, adding such versatility to the system comes at a cost. Generally, in situ tunability requires complex components or mechanisms to be present in the system, so that the wave path in the lattice structure can be reconfigured in a controllable and versatile manner. But such complexity in design could again make the system cumbersome for practical use, and it would defeat the purpose of building a simple QVHE-based system to some extent. Here we show that in situ tunability in a QVHE-based mechanical metamaterial can be achieved by utilizing the nonlinearity of the constituent elements in the system. More specifically, we use an assembly of the Stewart Platform (SP), in which the translational and rotational degrees of freedom of each SP are judiciously tailored to achieve a bistable response. Consequently, a simple dial-in action changes its configuration from one stable state to the other, and this feature can be used for in situ control of the wave path in the system. The SP already has a wide range of engineering applications, such as vibration control, precise positioning, and flight simulation27,28. Therefore, by integrating the elegant engineering of the SP with the fascinating physics of QVHE, we propose a dial-in mechanical metamaterial for creating tunable topological waveguides. We use extensive numerical simulations to show that this metamaterial can be tuned in situ to design a variety of topological waveguides with robust wave-propagation characteristics. The tunability also allows us to build traditional waveguides by suppressing topological variations in the same system, making it possible to compare their performance with the topological counterpart in remarkable detail.
Such a comparison therefore plays a key role in extending our knowledge of, and appreciation for, the uniqueness of topological waveguides in the proposed system.

## Results

### Design of the tunable topological metamaterial

The tunable system we propose is illustrated in Fig. 1. Each SP is bistable, i.e., it has two stable configurations: the Y-state (yellow) and the P-state (purple), as shown in Fig. 1a,b. This unit is made of two parallel disks connected with six linear springs. The conventional SP unit has six DOFs for the top disk, while the bottom disk is fixed27. However, by judiciously choosing the connecting springs, one can decouple some DOFs and reduce the total DOFs (see Supplementary Note 1). In this study, for the sake of simplicity, we assume that the bottom disk is pinned at its center, such that it can only rotate about the z-direction. We denote this rotational DOF of the bottom disk by ϕ_b. The top disk can have only rotational (ϕ_t) and translational (w_t) motions along the z-direction with respect to its equilibrium position. Note that these dynamic perturbation parameters, ϕ_b, ϕ_t, and w_t, should not be confused with θ_0 and h_0, which denote the equilibrium parameters of the SP unit cell and vary depending on whether it is in the Y- or P-state (Fig. 1a). All three DOFs of these dynamic motions in terms of ϕ_b, ϕ_t, and w_t are governed by the six springs between the plates. The detailed mathematical relationships, including the derivations of the bistability of the SP unit cell, are described in Supplementary Figure 1 and Note 2. We design a hexagonal lattice by combining the two stable states, such that the system breaks C_6 symmetry but retains C_3 symmetry (Fig. 1c). Only the bottom disk of each SP is connected with neighbouring SPs with springs (indicated with a torsional spring coefficient k_cc). Note that this connection is a reverse spring, i.e., it induces opposite torque in the connected units (a similar setup can be found in a recent work22).
Tunability comes from the fact that one can easily change the stable state of each SP—independently—to achieve a desired lattice configuration.

### Band-inversion and topology

In this section, we evaluate the dispersion characteristics of the system and observe a topological transition. For describing the dynamics of the (infinite) hexagonal lattice, we choose a periodic unit cell (highlighted in Fig. 1c), which consists of two SPs indexed as 1 and 2. Here, each SP can take either the Y- or P-state by dial-in actions. Since a single SP has three DOFs, the unit cell is represented by the following six parameters: $[\phi^{(1)b},\, \phi^{(2)b},\, \phi^{(1)t},\, \phi^{(2)t},\, w^{(1)t},\, w^{(2)t}]$. First, we write the equations of motion for the periodic unit cell (indexed as i, j) as:

$$I\ddot{\phi}^{(1)b}_{i,j} + k_{cc}\left(3\phi^{(1)b}_{i,j} + \phi^{(2)b}_{i,j} + \phi^{(2)b}_{i,j-1} + \phi^{(2)b}_{i-1,j}\right) - k^{(1)}_{\phi w}\, w^{(1)t}_{i,j} + k^{(1)}_{\phi\phi}\left(\phi^{(1)b}_{i,j} - \phi^{(1)t}_{i,j}\right) = 0 \tag{1a}$$

$$I\ddot{\phi}^{(1)t}_{i,j} + k^{(1)}_{\phi w}\, w^{(1)t}_{i,j} + k^{(1)}_{\phi\phi}\left(\phi^{(1)t}_{i,j} - \phi^{(1)b}_{i,j}\right) = 0 \tag{1b}$$

$$m\ddot{w}^{(1)t}_{i,j} + k^{(1)}_{ww}\, w^{(1)t}_{i,j} + k^{(1)}_{w\phi}\left(\phi^{(1)t}_{i,j} - \phi^{(1)b}_{i,j}\right) = 0 \tag{1c}$$

$$I\ddot{\phi}^{(2)b}_{i,j} + k_{cc}\left(3\phi^{(2)b}_{i,j} + \phi^{(1)b}_{i,j} + \phi^{(1)b}_{i,j+1} + \phi^{(1)b}_{i+1,j}\right) - k^{(2)}_{\phi w}\, w^{(2)t}_{i,j} + k^{(2)}_{\phi\phi}\left(\phi^{(2)b}_{i,j} - \phi^{(2)t}_{i,j}\right) = 0 \tag{2a}$$

$$I\ddot{\phi}^{(2)t}_{i,j} + k^{(2)}_{\phi w}\, w^{(2)t}_{i,j} + k^{(2)}_{\phi\phi}\left(\phi^{(2)t}_{i,j} - \phi^{(2)b}_{i,j}\right) = 0 \tag{2b}$$
$$m\ddot{w}^{(2)t}_{i,j} + k^{(2)}_{ww}\, w^{(2)t}_{i,j} + k^{(2)}_{w\phi}\left(\phi^{(2)t}_{i,j} - \phi^{(2)b}_{i,j}\right) = 0 \tag{2c}$$

where I and m are the rotational inertia and the mass of the disks, respectively. $k_{ww}$, $k_{\phi\phi}$, $k_{w\phi}$, and $k_{\phi w}$ are the stiffness coefficients for the relative translation and rotations, and their coupling. The detailed expressions of these coefficients as functions of $k_1$, $k_2$, and the geometric parameters are described in Supplementary Note 1. For a lattice length of a, we invoke Bloch's theorem by using the periodicity in two directions, $\mathbf{a}_1 = [1,\,0]\,a$ and $\mathbf{a}_2 = [1/2,\,\sqrt{3}/2]\,a$, and obtain the following eigenvalue problem:

$$\omega^2\,\mathbf{M}\mathbf{U} = \mathbf{B}\mathbf{U} \tag{3}$$

where ω is the angular frequency, and the generalized eigenvector $\mathbf{U} = [\phi^{(1)b}_{i,j},\, \phi^{(2)b}_{i,j},\, \phi^{(1)t}_{i,j},\, \phi^{(2)t}_{i,j},\, w^{(1)t}_{i,j},\, w^{(2)t}_{i,j}]$. $\mathbf{B}$ and $\mathbf{M}$ are the stiffness and mass matrices, respectively (see the detailed expressions in Supplementary Note 3). There are two possible configurations of the C_3-symmetric hexagonal unit cell: the Y-P configuration and the P-Y configuration. Figure 2 displays the dispersion properties and the associated band-inversion when one state is transformed into the other. Due to the breakage of inversion symmetry, a complete band gap emerges at the K point between the fifth and sixth bands (see the highlighted red and blue bands in Fig. 2a,b; for details about the Hamiltonian analysis of the system, see Supplementary Note 3). Note that these bands would have a degeneracy at the K point if the inversion symmetry were not broken (see the blue dashed lines). Although the dispersion curves for the Y-P and P-Y configurations look similar, there is a difference in terms of topology: the highlighted bands are inverted between these configurations — the so-called band-inversion.
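Numerically, each point of a dispersion band comes from solving the generalized eigenvalue problem of Eq. (3) at one wave vector. The sketch below shows only that numerical step, for a single 6×6 system with placeholder (made-up) stiffness and inertia values, not the paper's actual matrices, which depend on the SP geometry given in the Supplementary Notes.

```python
import numpy as np

rng = np.random.default_rng(0)

# placeholder matrices: M diagonal (inertias I and masses m), B symmetric positive definite
M = np.diag([2.0, 2.0, 2.0, 2.0, 1.0, 1.0])   # stands in for [I, I, I, I, m, m]
R = rng.standard_normal((6, 6))
B = R @ R.T + 6.0 * np.eye(6)                 # hypothetical stiffness matrix at one k

# reduce omega^2 M U = B U to a standard symmetric eigenproblem via M^{-1/2}
M_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(M)))
eigvals = np.linalg.eigvalsh(M_inv_sqrt @ B @ M_inv_sqrt)
freqs = np.sqrt(eigvals)                      # angular frequencies, ascending
print(freqs.shape)                            # (6,)
```

In the actual lattice calculation, B depends on the wave vector through Bloch phase factors and is Hermitian rather than real symmetric; sweeping k along the Brillouin-zone boundary and repeating this solve traces out the six bands of Fig. 2.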
We verify this by plotting in the left columns of Fig. 2c,d the mode shapes of the unit cell corresponding to the points K_1 (744.5 Hz) and K_2 (839.8 Hz) marked in the dispersion relations. For the Y-P configuration in Fig. 2c, we see that the low-frequency vibration (the K_1 point) corresponds to the case when only the P-state is vibrating in the lattice. In Fig. 2d, the P-state vibrates in the P-Y configuration at the low frequency in the same way. This makes sense, as the designed stiffness of the P-state is lower than that of the Y-state (see Supplementary Note 2). However, there is an inversion in terms of where in the unit cell the vibration is dominant, due to the flipping of the Y- and P-states. The aforementioned band-inversion resembles the one seen in the valley Hall effect. We can calculate the valley Chern number in order to track the topological transition associated with this effect. This is achieved by integrating the Berry curvature over half of the first Brillouin zone. Mathematically, the valley Chern number is

$$C^{K(K')} = \frac{1}{2\pi}\iint \Omega(\mathbf{k})\, d\mathbf{k}$$

where the Berry curvature

$$\Omega(\mathbf{k}) = i\sum_{v=1,\, v\neq u}^{6} \frac{\left\langle \mathbf{u}\,\middle|\,\partial\mathbf{B}/\partial k_x\,\middle|\,\mathbf{v}\right\rangle \left\langle \mathbf{v}\,\middle|\,\partial\mathbf{B}/\partial k_y\,\middle|\,\mathbf{u}\right\rangle - \mathrm{c.c.}}{\left(\omega_v^2 - \omega_u^2\right)^2}$$

as in ref. 25. Here, i is the imaginary unit, c.c. denotes the complex conjugate, ⟨·|·⟩ represents the inner product, $\mathbf{k} = [k_x,\, k_y]$ is the wave vector, and $\omega_u$ and $\omega_v$ are the angular frequencies corresponding to the normalized eigenvectors $\mathbf{u}$ and $\mathbf{v}$, respectively. The calculated Berry curvature in the first Brillouin zone is shown in the right columns of Fig. 2c,d. One notices that it is localized at the K and K′ points in the Brillouin zone, and that it changes its sign as we alter from the Y-P to the P-Y configuration—reflecting the band-inversion process.
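The Berry-curvature integration can be carried out on a discrete k-grid with the standard link-variable (lattice gauge) method. As a self-contained illustration we apply it to a textbook two-band gapped Dirac-type model rather than the six-band SP system of Eq. (3); the Hamiltonian, the parameter u, and the grid size below are our own choices for the sketch.

```python
import numpy as np

def chern_number(u=-1.0, N=24):
    """Chern number of the lower band of the two-band lattice model
    H(k) = sin(kx) sx + sin(ky) sy + (u + cos kx + cos ky) sz,
    computed from plaquette link variables on an N x N Brillouin-zone grid."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    ks = 2 * np.pi * np.arange(N) / N

    # lower-band eigenvector at every grid point
    vec = np.empty((N, N, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            H = np.sin(kx) * sx + np.sin(ky) * sy + (u + np.cos(kx) + np.cos(ky)) * sz
            _, v = np.linalg.eigh(H)
            vec[i, j] = v[:, 0]

    # Berry flux per plaquette from the phase of the product of link variables
    total = 0.0
    for i in range(N):
        for j in range(N):
            u1 = np.vdot(vec[i, j], vec[(i + 1) % N, j])
            u2 = np.vdot(vec[(i + 1) % N, j], vec[(i + 1) % N, (j + 1) % N])
            u3 = np.vdot(vec[(i + 1) % N, (j + 1) % N], vec[i, (j + 1) % N])
            u4 = np.vdot(vec[i, (j + 1) % N], vec[i, j])
            total += np.angle(u1 * u2 * u3 * u4)
    return total / (2 * np.pi)

C = chern_number()
# |C| = 1 for this gapped model; the sign depends on conventions
```

The same plaquette construction, restricted to half of the Brillouin zone around K or K′, yields the valley Chern numbers of ±1/2 quoted in the text; the link-variable formulation is convenient because it is gauge invariant and returns integers even on coarse grids.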
Therefore, the calculated valley Chern numbers $C^K$ (or $C^{K'}$) for the fifth and sixth bands of the Y-P configuration (the highlighted red and blue bands in Fig. 2a) are −1/2 and 1/2 (or 1/2 and −1/2), respectively. These are reversed for the P-Y configuration, which confirms that the two configurations are topologically distinct. The quantized difference of the valley Chern numbers of the two configurations, i.e., $$|{C}_{{\rm{Y}}-{\rm{P}}}^{K}-{C}_{{\rm{P}}-{\rm{Y}}}^{K}|=1$$, indicates the emergence of a topologically protected edge state at the interface if these configurations are placed adjacently. ### Topological defect and its manipulation In this section, we show how a topological defect can be created by placing topologically distinct lattices, P-Y (hereafter called type-I for the sake of simplicity) and Y-P (type-II), adjacently. We also show that this topological defect can easily be reconfigured into other shapes, thanks to the extreme tunability of the system. First, we confirm the existence of topologically protected interface modes for a linear topological defect (see Fig. 3a). To this end, we take a supercell of size 1 × 20 with the top and bottom boundaries fixed, and we apply the periodic boundary condition in the x-direction. The resulting dispersion of the supercell strip is plotted in Fig. 3b. We notice several bulk modes (black curves). These correspond to the bulk bands seen in the unit-cell analysis done earlier (Fig. 2a,b). However, we also note some additional modes (dashed red and blue curves in Fig. 3b). In blue are two overlapping modes that appear at the top and bottom of the supercell due to the identical boundaries on both sides. However, there is a distinct mode (red) inside the band gap. This corresponds to the interface mode emerging due to the distinct topological nature of the lattices I and II.
Additionally, at the very top of the dispersion diagram, we observe another interface mode (red) above the cutoff frequency. Now that we have shown the framework for creating a topological defect, it is also possible to achieve various complex shapes of topological interfaces in a 2D lattice by strategic dial-in actions on the SP cells. In Fig. 3c–e, we assemble some of these shapes with various bends, showcasing the manipulation capability of the system. Below are the plots, obtained from the numerical experiments (see Methods), for a harmonic excitation at 760 Hz applied to the 40 × 40 lattice. This excitation frequency, lying inside the band gap, excites the topologically protected mode and thus demonstrates robust energy transport through various interfaces. It is now natural to ask: in what ways is the topological waveguide different from a traditional waveguide? Here, by a traditional waveguide, we refer to a waveguide that is created without incurring topological disparities in the same system. One way to create such a waveguide is by introducing a topologically trivial defect along the desired wave path, so as to utilize the localized defect modes for wave transmission [29]. Figure 4a,b show exemplary cases of topological and traditional waveguides, respectively, realized in the same SP settings. Note that, given the initially uniform $C_3$-symmetric hexagonal structure, we can create the traditional defect simply by transforming P-states into Y-states (i.e., an in situ dial-in action) along the desired wave path (see the inset of Fig. 4b). However, the creation of topological defects in the originally uniform lattice requires more actions, since it necessitates a border between the type-I and type-II lattices. That is, we need to convert one side of the SP cells (i.e., the upper side of the lattice with respect to the wave path in Fig.
4a) from the type-I to the type-II lattice, which demands the wholesale flipping (i.e., dial-in action) of the SP cells from the Y- to the P-state and vice versa. The analogous operation in 1D lattice systems has been investigated by Chaunsali et al. [24]. The advantage of our system is that it can realize both traditional and topological waveguides in the same system by leveraging the bistable SP network and the in situ dial-in action on it. This provides an excellent opportunity to compare their transmission properties. We perform an eigenmode analysis on a 40 × 40 structure (with 9600 DOF in total) for both types of waveguide structures. In Fig. 4c,d, we plot the eigenfrequencies against the modal order for the topological and traditional waveguides, respectively. The blue and yellow curves correspond to the bulk bands highlighted in the unit-cell dispersion in Fig. 2a,b. Sandwiched between these two branches are the modes localized at the waveguide interfaces (red, as magnified in the insets). We observe that for the topological waveguide the entire band gap is populated by interface modes. This affirms our observations in the previous sections, where robust transmission was shown along topological interfaces. The traditional waveguide, however, does not produce modes spanning the entire band gap (Fig. 4d), indicating its limitations in terms of constructing a wide range of defect modes. For the corresponding eigenmode shapes, see Supplementary Movies 1 and 2. The aforementioned characteristics provide us with deep insight into the differences between the topological and traditional waveguides, and these are now used to explain the different transmission spectra along the waveguide channels. We perform numerical simulations and calculate the transmission spectra for a range of input frequencies (see Methods). In Fig. 4e, we plot the obtained transmission.
We can immediately notice a clear difference in the transmission inside the band gap (i.e., 744.5 Hz to 839.8 Hz). The topological waveguide leads to superior transmission throughout the band gap, whereas the traditional waveguide yields significant transmission only in a small range of frequencies (Δf). This transmission, limited to a small frequency window, is due to the presence of the defect modes previously shown in Fig. 4d. Even though the topological waveguide shows superior transmission overall for frequencies inside the band gap, we investigate whether it has any qualitative differences from the traditional waveguide within the small range of frequencies (Δf in Fig. 4e). To answer this question, we perform a numerical experiment on the waveguides by sending a 50-ms-long Gaussian packet at a frequency (757 Hz) lying inside the range of interest Δf. In Fig. 5, we compare the transient responses of the topological and traditional waveguides. We observe in Fig. 5a that in the topological waveguide the wave packet does not back-scatter at the multiple sharp bends and smoothly travels along the path. However, the traditional waveguide in Fig. 5b shows scattering around the bends. Note that the wave packet still propagates along the path and leads to significant transmission, as shown in Fig. 4e. Nonetheless, it is qualitatively different from the topological waveguide in that it allows back scattering around the bends while guiding the wave packet. The difference in wave transmission efficiency between the traditional and topological waveguides is further investigated systematically for bends of various angles in Supplementary Figures 2–4 and Note 4. ### One-way waveguide Lastly, we demonstrate that a topological waveguide constructed through our system can support one-way wave propagation if a valley-selective excitation is given at the source.
To this end, we first extract the amplitude and phase information of the topologically protected mode from the supercell analysis done earlier (Fig. 3a,b). In Fig. 6a,b, we calculate the amplitude ratio (i.e., $A_1/A_2$) and the phase difference ($\varphi_1-\varphi_2$) of the bottom disks' rotations between the Y- and P-states at the topological interface (see insets), and plot them as functions of the reduced wavevector ($k_x$) in the band-gap range. We observe that the trend can be categorized into two valleys. One corresponds to forward propagation (data points indicated with the subscript F in Fig. 6a,b), and the other to backward propagation (subscript B). These can be verified by looking at the slope (i.e., group velocity) of the protected mode (red) in Fig. 3b. In this way, the appropriate amplitude ratio and phase difference can be applied (i.e., valley selection at either K or K′) to the disks at the interface to excite either forward- or backward-propagating waves. This is confirmed by a transient analysis performed at 778 Hz in Fig. 6c. If the lattice is excited at the middle point (marked with a star) with an amplitude ratio of $A_1/A_2 = 1.748$ and a phase difference of $\varphi_1-\varphi_2 = 0.38\pi$, we observe a forward-propagating wave (left panel in Fig. 6c). However, if an excitation with the same amplitude but opposite phase (i.e., $\varphi_1-\varphi_2 = -0.38\pi$) is applied, we observe a backward-propagating wave (right panel). See Supplementary Movies 3 and 4 for more details. These numerical results attest that the bistable SP-based metamaterial system proposed in this study allows not only the in situ manipulation of wave paths via dial-in actions, but also the selective one-way propagation of mechanical waves via strategic excitations. ## Discussion Here, we propose a dial-in topological metamaterial system based on the bistable Stewart Platform (SP) and report robust one-way propagation of mechanical waves along tailorable wave paths.
By arranging the bistable SP cells hexagonally in an alternating fashion, we can create two types of topologically distinct lattices, which can be transformed into each other simply by a dial-in action. We prove that this transformation changes the topology of the system, quantified by the valley Chern numbers. When lattices with these two topologically distinct configurations are placed adjacently, we show the existence of a topologically protected mode at the interface. This idea is extended to tune the system in situ to create a variety of waveguides, and we demonstrate robust energy propagation along them using numerical simulations. We conduct eigenmode analyses and transient simulations of finite structures to highlight some key differences between topological and traditional waveguides in the same system. While the traditional waveguides also lead to significant wave transmission due to interfacial local modes inside the band gap, the edge modes generated in the topological waveguides are qualitatively different in that they support a wider range of frequencies and are immune to back scattering at sharp bends in the structure. We also show a strategy of applying a valley-selective excitation to the system such that one-way wave propagation is achieved along the waveguide. Therefore, this tunable system opens up possibilities to realize various complex shapes of topological waveguides without resorting to external fields or adding/removing masses from the system. Further studies, including the experimental verification of the proposed tunable metamaterials, will be reported in the authors' future publications. ## Methods ### Numerical experiment We employ the Runge-Kutta method (step size = $10^{-4}$ s) to obtain the response at any time instant for a variety of input signals: harmonic, Gaussian pulse, and sinusoidal frequency sweep. ### Calculation of the transmission spectrum We perform a numerical experiment and calculate the transmission spectra.
A sweep-frequency signal (20 Hz to 1000 Hz in 5 s) is used as a rotational perturbation applied to the bottom disk at the input location. The transient rotation of the bottom disk is measured at the output location. Thus, the transmission is $$T(\omega )=20\,\mathrm{log}[{\Phi }_{output}^{b}(\omega )/{\Phi }_{input}^{b}(\omega )]$$, where Φ(ω) represents the power spectral density (PSD) of the transient rotation ϕ(t) of the disk. ### Data availability The data that support the findings of this study are available from the corresponding author upon request. ## References 1. Hasan, M. Z. & Kane, C. L. Colloquium: topological insulators. Rev. Mod. Phys. 82, 3045 (2010). 2. Qi, X. L. & Zhang, S. C. Topological insulators and superconductors. Rev. Mod. Phys. 83, 1057–1110 (2011). 3. Thouless, D. J. Quantization of particle transport. Phys. Rev. B 27, 6083–6087 (1983). 4. Klitzing, K. The quantized Hall effect. Rev. Mod. Phys. 58, 519 (1986). 5. Kane, C. L. & Mele, E. J. Quantum spin Hall effect in graphene. Phys. Rev. Lett. 95, 226801 (2005). 6. Xiao, D., Yao, W. & Niu, Q. Valley-contrasting physics in graphene: magnetic moment and topological transport. Phys. Rev. Lett. 99, 236809 (2007). 7. Lu, L., Joannopoulos, J. D. & Soljačić, M. Topological photonics. Nat. Photon. 8, 821–829 (2014). 8. Khanikaev, A. B., Fleury, R., Mousavi, S. H. & Alù, A. Topologically robust sound propagation in an angular-momentum-biased graphene-like resonator lattice. Nat. Commun. 6, 8260 (2015). 9. Yang, Z. et al. Topological acoustics. Phys. Rev. Lett. 114, 114301 (2015). 10. Chen, Z. G. & Wu, Y. Tunable topological phononic crystals. Phys. Rev. Appl. 5, 054021 (2016). 11. He, C. et al. Acoustic topological insulator and robust one-way sound transport. Nat. Phys. 12, 1124–1129 (2016). 12. Zhang, Z. et al. Topological creation of acoustic pseudospin multipoles in a flow-free symmetry-broken metamaterial lattice. Phys.
Rev. Lett. 118, 084303 (2017). 13. Lu, J., Qiu, C., Ke, M. & Liu, Z. Valley vortex states in sonic crystals. Phys. Rev. Lett. 116, 093901 (2016). 14. Lu, J. et al. Observation of topological valley transport of sound in sonic crystals. Nat. Phys. 13, 369–374 (2017). 15. Huber, S. D. Topological mechanics. Nat. Phys. 12, 621–623 (2016). 16. Nash, L. M. et al. Topological mechanics of gyroscopic metamaterials. Proc. Natl. Acad. Sci. USA 112, 14495–14500 (2015). 17. Wang, P., Lu, L. & Bertoldi, K. Topological phononic crystals with one-way elastic edge waves. Phys. Rev. Lett. 115, 104302 (2015). 18. Chaunsali, R., Li, F. & Yang, J. Stress wave isolation by purely mechanical topological phononic crystals. Sci. Rep. 6, 30662 (2016). 19. Ong, Z. Y. & Lee, C. H. Transport and localization in a topological phononic lattice with correlated disorder. Phys. Rev. B 94, 134203 (2016). 20. Süsstrunk, R. & Huber, S. D. Observation of phononic helical edge states in a mechanical topological insulator. Science 349, 47–50 (2015). 21. Mousavi, S. H., Khanikaev, A. B. & Wang, Z. Topologically protected elastic waves in phononic metamaterials. Nat. Commun. 6, 8682 (2015). 22. Pal, R. K., Schaeffer, M. & Ruzzene, M. Helical edge states and topological phase transitions in phononic systems using bi-layered lattices. J. Appl. Phys. 119, 084305 (2016). 23. Chaunsali, R., Chen, C. W. & Yang, J. Subwavelength and directional control of flexural waves in plates using topological waveguides. arXiv preprint arXiv:1708.07994 (2017). 24. Chaunsali, R., Kim, E., Thakkar, A., Kevrekidis, P. G. & Yang, J. Demonstrating an in situ topological band transition in cylindrical granular chains. Phys. Rev. Lett. 119, 024301 (2017). 25. Pal, R. K. & Ruzzene, M. Edge waves in plates with resonators: an elastic analogue of the quantum valley Hall effect. New J. Phys. 19, 025001 (2017). 26. Liu, T. W. & Semperlotti, F.
Acoustic valley-Hall edge states in phononic elastic waveguides. arXiv preprint arXiv:1708.02987 (2017). 27. Wu, Y., Yu, K., Jiao, J. & Zhao, R. Dynamic modeling and robust nonlinear control of a six-DOF active micro-vibration isolation manipulator with parameter uncertainties. Mech. Mach. Theory 92, 407–435 (2015). 28. Dasgupta, B. & Mruthyunjaya, T. S. The Stewart platform manipulator: a review. Mech. Mach. Theory 35, 15–40 (2000). 29. Khelif, A., Choujaa, A., Benchabane, S., Djafari-Rouhani, B. & Laude, V. Guiding and bending of acoustic waves in highly confined phononic crystal waveguides. Appl. Phys. Lett. 84, 4400–4402 (2004). ## Acknowledgements The authors are grateful to Rui Zhu, Linyun Yang, and Xiaotian Shi for fruitful discussions. R.C., H.Y., and J.Y. are grateful for the financial support from the U.S. National Science Foundation (CAREER-1553202 and EFRI-1741685). Y.W. and K.Y. acknowledge the support from the Harbin Institute of Technology and the China Scholarship Council (Grant No. 201606120065). ## Author information Y.W., J.Y., and K.Y. conceived the original idea. Y.W. performed the numerical simulations. H.Y. calculated the bistable characteristic of the SP. Y.W. and R.C. analyzed the results and wrote the manuscript. J.Y. and K.Y. supervised the project. Correspondence to Kaiping Yu or Jinkyu Yang. ## Ethics declarations ### Competing Interests The authors declare that they have no competing interests. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Rights and permissions Wu, Y., Chaunsali, R., Yasuda, H. et al. Dial-in Topological Metamaterials Based on Bistable Stewart Platform. Sci. Rep. 8, 112 (2018). doi:10.1038/s41598-017-18410-x
local - Maple Help

type/local: check for a local variable

Calling Sequence: type(x, local); type(x, local(t))

Parameters: x - any expression; t - type

Description:
• The call type(x, local) returns true if x is a local variable and false otherwise.
• More precisely, it returns true if x is a symbol and is not equal to the global variable with the same name. This includes module exports, but not environment variables.
• The name local is a keyword, and therefore it must be enclosed in backquotes in a call to type.
• If the parameter t is included, it will also check that x is assigned something of that type.

Examples:

> type(x, `local`);
                                false
> type(convert(x, `local`), `local`);
                                true
> f := proc() local a; a end proc:
> z := f();
                                z := a
> type(z, `local`);
                                true
Interactors with testlib.h (Revision en8, by Xellos, 2016-07-21 23:50:51) Interactive problems are problems in which the solution talks to the judge. For example, 100553G - Gomoku. We don't see interactive problems much in ACM-ICPC-style contests; most of them are Olympiad-style (IOI and CEOI). Unfortunately, using interactive problems in Codeforces contests is not allowed, but you can see some of them in Gym. Polygon also handles such problems (there's a checkbox Interactive in the general info of the problem). When we don't want to handle the judging manually, we should use a program called an interactor to talk to the solution instead of a person. With testlib.h, we can write interactors as simply as checkers and validators. In an interactive problem, you may also use a checker. To connect these programs together (generator, validator, solution, checker, and interactor), you can use testlib input streams. An input stream is a structure that reads data from a specific file using some pre-implemented methods. Input streams you can use with testlib.h: 1. inf: the input generated by a generator or manually (in Polygon, manual tests and the output of generators, based on how the input file of the current test case was generated). 2. ouf: the output produced by the solution you're checking. 3. ans: the output produced by your correct solution. Also, there's an input/output stream for interactive tasks named tout. It's a log file: you can write information to it from the interactor and later check the information written in it with the checker (and determine the verdict). For writing to it, you can use C++ cout style, like tout << n << endl;. In the checker, you can read that information from ouf.
Methods you can use for input streams: see the validator docs. In the interactor, you read the information about the current test case from inf, write what needs to be given to the solution being checked and to the correct solution using stdout (online), read the output produced by the solution being checked using ouf (online), read the output produced by your correct solution using ans (online), and write a log to tout if you want. If, at any time, one of the input-stream methods used in the interactor fails, the verdict will be Wrong Answer. You can also determine the verdict yourself in the interactor. There are many useful methods in testlib that you can use in interactors for assert-like checking, ensuring, and determining the verdict. You can find them in the checker docs (methods like quitf and ensuref). You can also see the possible verdicts in the checker docs. If the verdict determined by the interactor is ok, it will then be confirmed by the checker (which uses tout/ouf), if there is one. How to use an interactor program? Simple: Windows: interactor.exe <Input_File> <Output_File> [<Answer_File> [<Result_File> [-appes]]]; Linux: ./interactor.out <Input_File> <Output_File> [<Answer_File> [<Result_File> [-appes]]]. In both cases the interactor reads the test from inf (mapped to argv[1]), writes the result to tout (mapped to argv[2], which can be judged by a checker later), reads the program's output from ouf (mapped to stdin), and writes output to the program via stdout (use cout, printf, etc.). ### Sample Interactive Problem I (the judge) choose an integer in the interval [1, 10^9] and you should write a program to guess it. You can ask me at most 50 questions. In each question, you tell me a number in the interval [1, 10^9], and I tell you: • 1 if it is equal to the answer (the chosen number); your program should stop asking after that.
• 0 if it is smaller than the answer. • 2 if it is greater than the answer. Sample interactor for this problem: Note: like checkers, validators, and generators, you should first initialize your interactor with registerInteraction(argc, argv). Please note that in this problem we can determine the verdict without using the correct solution and ans, because we don't care about its output. But in some problems, we'll have to compare it with the output of the correct solution using ans.

```cpp
#include "testlib.h"
#include <iostream>
using namespace std;

int main(int argc, char** argv) {
    registerInteraction(argc, argv);
    int n = inf.readInt();  // chosen integer
    cout.flush();           // to make sure output doesn't get stuck in some buffer
    int left = 50;
    bool found = false;
    while (left > 0 && !found) {
        left--;
        int a = ouf.readInt(1, 1000000000);  // the number you tell me
        if (a < n)
            cout << 0 << endl;
        else if (a > n)
            cout << 2 << endl;
        else
            cout << 1 << endl, found = true;
        cout.flush();
    }
    if (!found)
        quitf(_wa, "couldn't guess the number with 50 questions");
    ouf.readEof();
    quitf(_ok, "guessed the number with %d questions!", 50 - left);
}
```

Resources: checkers, validators, and my personal experience from reading one of MikeMirzayanov's interactors.
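From the contestant's side, the problem above is a textbook binary search: each reply of 0 or 2 halves the remaining interval, so at most ⌈log2(10^9)⌉ = 30 of the 50 allowed questions are needed. Here is a stand-alone sketch of the protocol in plain Python (not testlib; the judge is simulated locally, whereas a real submission would print guesses to stdout and read verdicts from stdin, flushing after every question):

```python
def make_judge(secret):
    """Simulate the interactor's replies: 1 = correct guess,
    0 = guess is smaller than the secret, 2 = guess is greater."""
    def reply(guess):
        if guess == secret:
            return 1
        return 0 if guess < secret else 2
    return reply

def solve(reply, lo=1, hi=10**9, budget=50):
    """Binary-search guesser; returns (secret, questions_used)."""
    for used in range(1, budget + 1):
        mid = (lo + hi) // 2
        verdict = reply(mid)
        if verdict == 1:
            return mid, used
        if verdict == 0:   # mid is smaller than the secret
            lo = mid + 1
        else:              # mid is greater than the secret
            hi = mid - 1
    return None, budget

value, used = solve(make_judge(123456789))
print(value, used)  # finds the secret well within the 50-question budget
```

The interactor's per-question verdicts (and the final quitf message) take the place of the return values here.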
# How to sample a statistic? Disclaimer: I am a software developer and I like stats, but I'm not a professional statistician. I have already experienced that my wording is not always the correct jargon; please keep that in mind. I would like your opinion on how to build a concrete statistical model. The Goal Why am I doing this: I need to assess the reliability of a food database. Does it contain the food that most people eat? With the help of CrossValidated I made an arbitrary benchmark that represents what I want to know: 1. The top 30% of food items 2. by the amount a normal person eats in one meal 3. sold in Austria annually. The Target Group The goal is to understand what the most eaten foods in Austria (citizens: 8m) are, but more specifically among people who are Internet-savvy, so I tend to say urban and educated. There are special segments, mothers with children and athletes, that I will address in a second study. The Data It is about branded food, because the database is already exhaustive for natural food. Each brand has to be recorded separately. I would like to know what you think of my approach: I go to a supermarket and observe the cash desk for a period of time and record every food item that is bought. This is one sample; the population is the overall consumption per food item, e.g., 23000 liters of Coca-Cola, 600000 apples. 1. Is this a wise design? 2. What would be the appropriate sample size? 3. How would you do it? Take into account that this is currently a hobby we are doing with 3 people. I'm comfortable spending a few hundred euros on the inquiry, but only if spent wisely. - certainly an interesting approach. Obviously, the longer you can observe, the less uncertainty you'll have (due to limited sample size), and the more such observations you can make (e.g. different days of the week, hours of the day, months of the year, different locations and stores), the less biased the observation will be. – Andre Holzner Sep 17 '11 at 15:27 what is the 30th percentile of the most eaten foods?
Note that in your example you seem to be comparing apples with litres. Summing the weights would make them comparable. So do you mean 'the top 30% of items by total weight sold'? – Andre Holzner Sep 17 '11 at 15:34 There will be problems here about how representative your sample is of food in Austria: people may buy different foods elsewhere, for example in specialist shops or markets, and may buy different foods on different days or at different times. – Henry Sep 17 '11 at 15:49 @Andre Holzner, I added your suggestion 'the top 30% of items by total weight sold' to the question – Roland Kofler Sep 17 '11 at 17:03 I think supermarkets are really representative and people mainly buy their food there; the occasional bio-market visit etc. does not bother me. And I am more concerned with processed/branded food than natural food like an apple – Roland Kofler Sep 17 '11 at 17:05 ## Sampling Method I would like to expand on the problem mentioned by Henry. Let's just point out a few problems that may arise: • People tend to use the supermarket that is close to their homes. As you know, there are different types of people in different areas - and I would expect that the goods bought highly depend on personal education and financial background. • There are supermarkets with higher prices - it is likely that those supermarkets are visited by different people than the discount supermarkets. • Mothers with children probably buy at different times than people who work full-time. But do they buy the same things? In statistical terminology, you will have to take care with the sampling method you use. From every random sample you can make a guess at the distribution it was taken from. However, you have to take care that you take the sample from the correct distribution and not from a special subset. ## Sample sizes Firstly, on your wording: the population we are talking about is the set of all items that can be bought, not the people living in Austria.
The population always denotes the possible outcomes of one random sample - and you are observing items bought. It is hard to tell if you will get enough samples - this will depend on the number of customers you will be able to observe, as well as it will depend on the amount of different goods you record. Let's have a look at two (very constructed) examples. Say, you record 1000 people each buying exactly one item. In extreme cases, the following might happen: • All people buy the same product, let's say it is milk. In this case, your sample size should certainly be big enough to conclude that milk is one of the top sold products. • Everybody buys something different. Then, with this sample size, it will be impossible to determine the most sold product. This shows that the sample size you have to take on a huge amount depends on the variance you encounter in your data. Furthermore, the sample sizes depend on the statistical method you will use. Usually, the more you can assume on your distribution, the stronger the method you can use. For example, if you can assume a normal distribution, you may use parametric tests that usually do not need a lot of samples. This is not surprising, as you put a lot of information (normality) as a guess in your data, which leaves only a little bit of information to be determined by the data. However, if you have no information on the distribution, the test will have to guess everything. This naturally means more information will be needed beforehand. That is why often small sample sizes are taken as a pre-study. Afterwards, you will have a feeling on the variance and will be able to determine the statistical methods that will be used as well as their requirements in terms of sample sizes. Finally, you should define the groups you are looking for. Is the manufacturer of something important to you? Will you just group 'Cheese', or will there be different groups of cheese? How would I do it? This really depends on my intention. 
Do I have a budget? Do I have multiple people taking samples? Maybe there are supermarkets that offer me their product statistics. Maybe it would be an idea to interview the people you recorded to identify differences in personal background. Then you could check whether this differences influence the output. Furthermore, it might be worth doing a small study first to identify further problems that may arise with sampling and data recording. As you are looking at Austria, I assume you speak German. Which means I can point you to a book that I do not yet have read in total but that might bring up a lot of questions relevant to your problem. It is called "Stichproben" by Kauermann and Küchenhoff. Sorry for all the english readers around here, I do not know an english book about that topic... - A really great answer. I will edit my original post to answer the flaws you pointed out and define my topic better. – Roland Kofler Sep 17 '11 at 18:47 you said ": It is hard to tell if you will get enough samples" given the population of 8m what would be the approrpiate sample size? – Roland Kofler Sep 17 '11 at 19:15 I made a rather large edit to include some of the information you gave. Hopefully, this gives you some more ideas and hints. – Thilo Sep 18 '11 at 7:38 Thilo, thank you for the tip, I've already started to read the Book on Google Books, I did a few things in R so I really enjoy it. About budget and people: I've already addressed this in the last sentence of my Question. Also, I will look at the specific brand, because I need to know I that product "m&m's", "philadelphia by kraft" etc. is in my DB. I imagine this will increase the sample size needed. – Roland Kofler Sep 18 '11 at 9:32 I will reedit my Question concerning distribution during the day. I currently believe it will be a zipf-law distribution not a normal. But I still want to do some research. What do you think am I on track with zipf? 
– Roland Kofler Sep 18 '11 at 9:32 Sorry for the late answer (and after an answer has been accepted ;-)), but there is an issue that is flawed in the proposal and perhaps in the answers. If you are asking about foods that are eaten over an annual range, then any sample that does not encompass the year may well miss (or, conversely, over-weight) some seasonal anomalies. For instance, a sample that is constructed during holidays or festivals may overcount alcohol, sweets, breads, certain types of meats, etc. Depending on the season, different vegetables may be under or over counted. Were I undertaking such a study, I wouldn't do it observationally, but instead inquire about the sales of the different supermarket chains. This would give you a far larger dataset than could be obtained visually. A more out of the box approach would be to consult with certain tax agencies or the food inspection agencies; these may have very good knowledge of the quantity and dollar volume of sales. Another out of the box approach is to have a raffle for people who mail their store receipts. Of course this is a biased sample (e.g. some people may not want to share indications of their addiction to snack foods), but it is one method of getting post-sales info. Last, but not least, think about what will be the goal of the user of this database. If they feel that there is a mismatch between your sampling methods and their needs, then either the design will need to be adjusted or they will need to be educated on the advantages of your design. - This may be rather a comment to the first point under Sampling Method of Thilo Schneider's answer (but it's too long and I cannot format nicely): Not only do different groups of people buy different things, but the supermarkets do adjust their goods to that, so there is a reinforcement of the bias. 
People cannot buy their preferred brand in the supermarket close by - I for one in that situation either switch to something else or postpone buying it till I happen to come along some other shop where I can buy it. Conclusion: If you want to compare preferences (e.g. which brand/kind of cheese is eaten most) and taking into account that you have limited resources: pick supermarkets with high "local diversity", i.e. either supermarkets that offer a lot of choice themselves, or supermarkets from area where lots of supermarkets are close by. In other words, if I wanted to compare, I'd try to have the consumers as free choice as possible, and rather account for the bias in picking supermarkt locations by stating afterwards that these results are for that kind of supermarket and location. Though it depends on the aim of your study whether this is a valid approach. - In the end I don't care of preference but only what they really eat. If they buy stuff they like less but do because no better product around then this is a fact I must accept. Local diversity is still interesting though – Roland Kofler Oct 6 '11 at 20:54
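The sample-size question and the Zipf-law guess raised in the thread can be explored with a quick simulation: assume purchase frequencies follow a Zipf law and check how well samples of different sizes recover the true top items. This is only an illustrative sketch; the catalogue size, Zipf exponent, and seed are all invented, not estimates for Austria.

```python
# Sketch: how many observed purchases are needed to recover the top items,
# assuming purchase frequencies follow a Zipf law? All numbers are invented.
import random
from collections import Counter

random.seed(42)
n_items = 200                                            # hypothetical catalogue size
weights = [1 / rank for rank in range(1, n_items + 1)]   # Zipf law, exponent 1
true_top = set(range(30))                                # by construction: ranks 1..30

for sample_size in (100, 1000, 10000):
    purchases = random.choices(range(n_items), weights=weights, k=sample_size)
    observed_top = {item for item, _ in Counter(purchases).most_common(30)}
    overlap = len(observed_top & true_top) / 30
    print(f"n={sample_size:>6}: recovered {overlap:.0%} of the true top 30")
```

The heavier the tail (the flatter the Zipf curve), the more the borderline items near rank 30 get confused with items just below them, and the larger the sample needed, which is exactly the variance dependence Thilo's answer describes.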
https://www.tamashebi.myweb.ge/syga5/5759c7-gamma-diversity-slideshare
## gamma diversity slideshare

Biological diversity, abbreviated as biodiversity, represents the sum total of various life forms, from unicellular to multicellular organisms, at all biological levels. The term "biodiversity" was first coined by Walter G. Rosen in 1986. It denotes the degree of variation of life forms within an ecosystem, a biome, or the entire planet.

Whittaker (1972) described three scales for measuring diversity:

- **Alpha diversity** (α-diversity): the diversity within a single habitat or sample.
- **Beta diversity** (β-diversity): the diversity between samples or habitats; in the worked example, the pairwise species differences between three habitats were A vs B = 8 species, B vs C = 4 species, A vs C = 10 species.
- **Gamma diversity** (γ-diversity): the diversity of the habitats over the total landscape or geographical area, i.e. the total species diversity observed when all samples are combined. In the worked example, the gamma diversity is 3 habitats with 12 species in total. The term gamma diversity was introduced by R. H. Whittaker together with the terms alpha diversity and beta diversity.

Species diversity refers to the variety of species within a region. Diversity consistently measures higher in the tropics and lower in polar regions; rain forests that have had wet climates for a long time have particularly high biodiversity, and terrestrial biodiversity is thought to be up to 25 times greater than ocean biodiversity.

On the value of biodiversity: it is the most precious gift of nature. For food alone, there are about 80,000 edible plants, and about 90% of present-day food crops derive from them. Given the obvious risk of loss of diversity, it is increasingly necessary to take conservation actions, which requires measuring the variation of diversity in space and time. Estimators used for this include the bias-corrected Chao2 estimator, which is based on the number of species present in exactly 1 and 2 samples, and the first- and second-order jackknife estimators.

Separately, on the letter and symbol gamma: gamma (uppercase Γ, lowercase γ) is the third letter of the Greek alphabet, with the numeral value 3 in the system of Greek numerals. In physics, γ denotes the photon (in a narrower technical sense, only photons of very high energy, i.e. gamma rays, which are emitted in practically all nuclear transformations) and the Lorentz factor. The Euler gamma function, one of the most important special functions of analysis, is defined by Γ(α) = ∫₀^∞ x^(α−1) e^(−x) dx; the gamma distribution, which takes two positive parameters, owes its importance largely to its relation to the exponential and normal distributions (in life testing, for example, waiting time until death is frequently modeled with a gamma distribution), and related distributions include the χ²-, Student t- and Fisher F-distributions. In imaging, gamma also names a power function whose exponent, called gamma for short, is its only parameter.
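Whittaker's three scales can be made concrete with a small sketch. The habitat species lists below are invented for illustration (chosen so the totals match the worked example of 3 habitats and 12 species); the relationship β = γ / mean(α) is Whittaker's multiplicative beta.

```python
# Sketch: alpha, beta and gamma diversity for three hypothetical habitats.
# The species lists are invented; only the relationship
# beta = gamma / mean(alpha) (Whittaker's multiplicative beta) is standard.
habitats = {
    "A": {"oak", "fern", "moss", "ivy", "birch"},
    "B": {"oak", "reed", "sedge", "willow"},
    "C": {"cactus", "sage", "yucca", "pine"},
}

alphas = [len(s) for s in habitats.values()]   # within-habitat species richness
gamma = len(set.union(*habitats.values()))     # total richness across the landscape
mean_alpha = sum(alphas) / len(alphas)
beta = gamma / mean_alpha                      # Whittaker's multiplicative beta

print("alpha:", alphas, " gamma:", gamma, " beta:", round(beta, 2))
```

Here gamma comes out to 12 species across 3 habitats, matching the worked example; beta rises as the habitats share fewer species.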
https://or.stackexchange.com/tags/local-minimum/hot
# Tag Info

## Hot answers tagged local-minimum

**6**

You have shown that KKT is necessary for a local minimum, and also that it is necessary for a local maximum. But you have not shown that a local minimum or local maximum exists. Indeed, there is no local maximum. So is the KKT point you found a local minimum? That is what 2nd-order conditions can assess. The 2nd-order (KKT) sufficiency conditions (whose ...

**4**

The notation $C^1$ means $f'$ is continuous (on $\Bbb R$, as the interval is not stated). In general, $C^k(a,b]$ means that all of $f',f'',\cdots,f^{(k)}$ are continuous on $(a,b]$. You are correct that radial unboundedness means that $f\to\infty$ as $\|x\|\to\infty$. This method is essentially the one used for Lyapunov stability. Ahmadi and Jungers (2018)1 proved ...

**1**

If a deterministic global optimisation solver (such as Baron) reports a local solution, that solution is reliable. If the solver is terminated prematurely, the global solver will return the best solution it has found so far. For NLP, it is quite common that global solvers find the global solution very early on, and then spend the majority of time proving it ...
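The second-order check mentioned in the first answer can be sketched numerically: at a KKT point, one common test is whether the Hessian of the Lagrangian is positive definite on the null space of the constraint Jacobian. The toy problem below (minimize x² + y² subject to x + y = 1, with KKT point (1/2, 1/2) and multiplier 1) is an invented example, not the problem from the thread.

```python
# Sketch: checking 2nd-order sufficiency at a KKT point of a toy
# equality-constrained problem (an invented example):
#   minimize f(x, y) = x^2 + y^2   subject to   g(x, y) = x + y - 1 = 0.
# Sufficiency holds if Z^T H Z is positive definite, where H is the Hessian
# of the Lagrangian and Z spans the null space of the constraint gradient.
import numpy as np

H = np.array([[2.0, 0.0], [0.0, 2.0]])   # Hessian of L = f - lambda * g
grad_g = np.array([[1.0, 1.0]])          # constraint Jacobian at the KKT point

# Null-space basis of grad_g via SVD: the trailing right-singular vectors.
_, s, vt = np.linalg.svd(grad_g)
Z = vt[1:].T                             # columns spanning the tangent space

projected = Z.T @ H @ Z                  # reduced (projected) Hessian
eigenvalues = np.linalg.eigvalsh(projected)
print("reduced Hessian eigenvalues:", eigenvalues)
print("strict local minimum:", bool(np.all(eigenvalues > 0)))
```

Positive eigenvalues of the reduced Hessian certify a strict local minimum; a KKT point alone, as the answer notes, certifies neither a minimum nor a maximum.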
https://cran.csiro.au/web/packages/refset/vignettes/refset.html
# refset - subsets with reference semantics

## The skinny

### Installation

```r
# stable version from CRAN:
install.packages("refset")

# development version from github:
library(devtools)
install_github("hughjonesd/refset")
```

### Creating a refset

```r
library(refset)

employees <- data.frame(
  id     = 1:4,
  name   = c("James", "Sylvia", "Meng Qi", "Luis"),
  age    = c(28, 44, 38, 23),
  gender = factor(c("M", "F", "F", "M")),
  stringsAsFactors = FALSE)

refset(rs, employees[1:2,])
```

### Refsets refer to the original

```r
rs
##   id   name age gender
## 1  1  James  28      M
## 2  2 Sylvia  44      F

rs$age <- rs$age + 1
employees$age
## [1] 29 45 38 23
```

### You can have refsets of refsets

```r
refset(rs2, rs$id)
rs2
## [1] 1 2

rs$id <- rs$id + 1000
rs2
## [1] 1001 1002

rs2 <- 101:102
employees$id
## [1] 101 102   3   4
```

### Refset size can change dynamically

```r
# the multi-argument form. Note the empty argument, to select all columns:
refset(rsd, employees, age < 30, , drop=FALSE)
rsd
##    id  name age gender
## 1 101 Jimmy  29      M
## 4   4  Luis  23      M

employees$age <- employees$age + 1
rsd
##   id name age gender
## 4  4 Luis  24      M
```

### You can refset any subsettable object…

```r
vec <- 1:10
refset(rs, vec, 4:6)
rs <- rs*10
vec
##  [1]  1  2  3 40 50 60  7  8  9 10
```

### … using any form of subsetting

```r
lst <- list(a="text", b=42, NA)
refset(rsl, lst$b)
rsl <- "more text"
lst$b
## [1] "more text"
```

### The short form

```r
rs %r% employees[1:3,] # equivalent to refset(rs, employees[1:3,])
```

### To pass a refset into a function, use wrapset to create a parcel:

```r
f <- function(x) {
  cx <- contents(x)
  contents(x)$name <- paste(cx$name, "the",
    sample(c("Kid", "Terrible", "Silent", "Fair"), nrow(cx), replace=TRUE))
}

parcel <- wrapset(employees[])
f(parcel)
employees
##    id                 name age gender
## 1 101     Jimmy the Silent  30      M
## 2 102       Silvia the Kid  46      F
## 3   3 Meng Qi the Terrible  39      F
## 4   4         Luis the Kid  24      M
```

## Introduction

Normally, R uses "pass by value". This means that when you run

```r
b <- a
```

you have two independent copies of the same data.
Similarly, the code:

```r
f <- function(x) {x <- x*2}
a <- 4
f(a)
a
## [1] 4
```

does not change the value of a, since the function f gets passed the contents of a rather than the variable a itself.

This is fine for most cases, especially for traditional uses of R in which the programmer or statistician passes a value to a function and sees the result on the command line. However, in some cases we would like to work with a single object rather than multiple copies. For example:

- working on a complex dataset, an analyst may wish to work with part of the dataset, but to have any changes reflected in the whole data frame;
- if a data frame represents objects in a relational database, changes to the database on disk should be reflected in the data frame;
- for large datasets, assigning into multiple copies can take up memory.

The refset package allows you to do this, by creating objects that refer to other objects, or to subsets of them. To create a refset, call refset with two arguments:

```r
dfr <- data.frame(x1=1:5, x2=rnorm(5), alpha=letters[1:5])
refset(rs, dfr[dfr$x1 <= 3, c("x1", "alpha")])
```

The call above creates a new variable rs in your environment. (Strictly, it creates a new binding, but we needn't worry about that for now.) For comparison, we'll also create a standard subset:

```r
ss <- dfr[dfr$x1 <= 3, c("x1", "alpha")]

rs
##   x1 alpha
## 1  1     a
## 2  2     b
## 3  3     c

ss
##   x1 alpha
## 1  1     a
## 2  2     b
## 3  3     c
```

rs and ss look and behave just the same:

```r
c(class(rs), class(ss))
## [1] "data.frame" "data.frame"

c(mean(rs$x1), mean(ss$x1))
## [1] 2 2
```

To see the difference, let's change the data in dfr:

```r
dfr$alpha <- c(NA, letters[23:26])

rs
##   x1 alpha
## 1  1  <NA>
## 2  2     w
## 3  3     x

ss
##   x1 alpha
## 1  1     a
## 2  2     b
## 3  3     c
```

As is normal, ss has not updated to reflect changes in the original data frame. But rs has. The connection also works the other way, if you change rs.
```r
rs$alpha <- LETTERS[1:3]

rs
##   x1 alpha
## 1  1     A
## 2  2     B
## 3  3     C

dfr
##   x1         x2 alpha
## 1  1 -1.4065121     A
## 2  2  0.9061236     B
## 3  3 -1.4289599     C
## 4  4 -0.1141033     y
## 5  5  1.2529014     z
```

Everything that you do to rs will be reflected in the original data, and vice versa. Well, almost everything: remember that rs refers to a subset of the data. If you can't do it to a subset, you probably can't do it to a refset. For example, changing the names of a refset doesn't work, because assigning to the names of a subset of your data doesn't change the original names.

## Ways to call refset

There are three ways to create a refset. The first you have already seen: call refset(name, data[indices]), where name is the name of the variable you want to create, and data[indices] is the subset you want to look at. You aren't limited to using data frames. You can refset any object which you can subset, and you can use any of the three standard ways to subset data: $, [[ and [.

```r
vec <- 1:10
refset(rvec, vec[2:3])

mylist <- list(a="some", b="more", c="data")
refset(rls, mylist$b)
refset(rls2, mylist[["c"]])

rvec
## [1] 2 3

c(rls, rls2)
## [1] "more" "data"
```

However, this won't work:

```r
myss <- subset(dfr, x1>1)
refset(rs, myss)
## Error in substitute(data)[[1]]: object of type 'symbol' is not subsettable
```

You have to write out the subset you want explicitly: you can't put it in a variable.

The second way to call refset is with the %r% infix operator. This is conveniently short, and also makes it clearer that you are assigning to a variable.

```r
top4 %r% dfr[1:4,]
exists("top4")
## [1] TRUE
```

The last way to create a refset is the 3-or-more-argument form of the function. This works like the subset command in base R: you can refer to data frame columns directly by name.

```r
refset(large, dfr, x2 > 0,)

large
##   x1        x2 alpha
## 2  2 0.9061236     B
## 5  5 1.2529014     z
```

Notice that we've included an empty argument.
This is just the same as when you call `dfr[dfr$x2 > 0, ]` with an empty argument after the comma: it includes all the columns.

## Dynamic indexing

Refsets don't just sync their data with their "parent". They also update their indices dynamically. For example, suppose we have a database of employees, including hours worked in the past month.

```r
employees <- data.frame(
  id=1:4,
  name=c("James", "Sylvia", "Meng Qi", "Luis"),
  age=c(28, 44, 38, 23),
  gender=factor(c("M", "F", "F", "M")),
  hours=c(160, 130, 185, 145),
  pay=c(60000, 50000, 70000, 60000),
  stringsAsFactors=FALSE)
```

We can create a refset of employees who worked overtime:

```r
overtimers %r% employees[employees$hours > 140,]
overtimers
##   id    name age gender hours   pay
## 1  1   James  28      M   160 60000
## 3  3 Meng Qi  38      F   185 70000
## 4  4    Luis  23      M   145 60000
```

When the new monthly data comes in, the set of people in `overtimers` will change:

```r
employees$hours <- c(135, 150, 70, 145)
overtimers
##   id   name age gender hours   pay
## 2  2 Sylvia  44      F   150 50000
## 4  4   Luis  23      M   145 60000
```

Sometimes you may wish to turn this behaviour off. For example, you may want to look at a particular subset that had a certain characteristic at a point in time. For this, pass the argument `dyn.idx=FALSE` to `refset`.

```r
# people who worked long hours last month:
refset(overtimers_static, employees, hours > 140, , dyn.idx=FALSE)
# give them a holiday...
overtimers_static$hours <- 0
# ... and a pay rise
overtimers_static$pay <- overtimers_static$pay * 1.1
overtimers_static
##   id   name age gender hours   pay
## 2  2 Sylvia  44      F     0 55000
## 4  4   Luis  23      M     0 66000
```

Without the `dyn.idx=FALSE` argument, the refset would have zero rows after the call setting `hours` to 0.

## Delinking from the parent, and using parcels

If you want to break the link to the parent dataset, simply assign your refset to a new variable.
```r
copy <- overtimers
copy$pay <- copy$pay * 2
employees$pay # still the same :/
## [1] 60000 55000 70000 66000
```

Refsets are implemented using an R feature called "active binding", which calls a function when you access or change a variable. Reassigning to a new variable reassigns the contents, rather than the binding. This causes a problem if you want to pass a reference into functions, rather than passing the value of the refset: for example, if you would like to change the refset in the body of the function, and have this affect the original data. When you use a refset as a function argument, R binds the argument to a new value, breaking the link with the parent.

If you are writing your own code, you can avoid this problem by creating a refset which is "wrapped" in a parcel object. Parcels simply contain an expression and an environment in which the expression should be evaluated. For example, they can contain the name of a refset. When the `contents` function is called on a parcel, the expression is re-evaluated. Here's how to write a function that changes the names of our employees:

```r
rs %r% employees[1:3,]
f <- function(x) {
  cx <- contents(x)
  contents(x)$name <- paste(cx$name, "the",
    sample(c("Kid", "Terrible", "Silent", "Fair"), nrow(cx), replace=TRUE))
}
parcel <- wrapset(employees[])
f(parcel)
employees
##   id             name age gender hours   pay
## 1  1 James the Silent  28      M   135 60000
## 2  2   Sylvia the Kid  44      F     0 55000
## 3  3  Meng Qi the Kid  38      F    70 70000
## 4  4  Luis the Silent  23      M     0 66000
```

As the above shows, you can assign to `contents(parcel)` as well as read from it. You can also create a new variable from the parcel by using `unwrap_as`.
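The "active binding" mechanism behind refsets can be illustrated directly with base R's `makeActiveBinding()`. The sketch below is an illustration of the idea only, not refset's actual implementation; the `store` environment and the name `ab` are invented for the example:

```r
# Minimal active-binding sketch (illustration only, not refset's real code).
# `store` is an environment acting as the backing data; environments have
# reference semantics, so the binding function can update it in place.
store <- new.env()
store$hidden <- 10

makeActiveBinding("ab", function(value) {
  if (missing(value)) {
    store$hidden           # read access: return the backing value
  } else {
    store$hidden <- value  # write access: update the backing value
  }
}, environment())

ab            # every read goes through the function: 10
ab <- 99      # every write goes through the function
store$hidden  # the backing value is now 99

copy <- ab    # ordinary assignment copies the current *value* ...
ab <- 1
copy          # ... so `copy` is delinked: still 99
```

This is also why `copy <- overtimers` above breaks the link: the assignment triggers the read branch and stores a plain copy of its result.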
Another way to write the renaming function would be:

```r
f <- function(parcel) {
  unwrap_as(emps, parcel)
  emps$name <- paste(emps$name, "the",
    sample(c("Kid", "Terrible", "Silent", "Fair"), nrow(emps), replace=TRUE))
}
f(parcel)
employees
##   id                          name age gender hours   pay
## 1  1      James the Silent the Kid  28      M   135 60000
## 2  2  Sylvia the Kid the Terrible  44      F     0 55000
## 3  3    Meng Qi the Kid the Silent  38      F    70 70000
## 4  4  Luis the Silent the Terrible  23      M     0 66000
```

Using parcels is one way to pass references around your code. You could also do this using non-standard evaluation (NSE). Parcels have the nice feature that they store the environment in which they should be evaluated.
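To make the NSE comparison concrete, here is a hedged sketch in plain base R (the function name `modify_in_place` and the example data are invented): the function captures its unevaluated argument with `substitute()`, reads the value from the caller's environment, and writes the modified value back by evaluating an assignment there.

```r
# Sketch of the NSE alternative: pass an expression, not a value.
# (Names here are invented for illustration; this is base R, not refset.)
modify_in_place <- function(expr) {
  e <- substitute(expr)          # capture the unevaluated argument, e.g. `d`
  env <- parent.frame()          # the caller's environment
  val <- eval(e, env)            # read the current value
  val$name <- toupper(val$name)
  eval(call("<-", e, val), env)  # write the result back into the caller
  invisible(val)
}

d <- data.frame(name = c("james", "sylvia"), stringsAsFactors = FALSE)
modify_in_place(d)
d$name  # now c("JAMES", "SYLVIA")
```

A parcel plays the same role as this captured expression-plus-environment pair, but as a first-class object that you can store and pass along.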
https://www.math.ucla.edu/~topology/latop.html
# The Joint Los Angeles Topology Seminar

### 2019-2020

Online (Zoom), 2020-05-11, 16:00-17:00

**Matthew Hedden (Michigan State University):** Corks, involutions, and Heegaard Floer homology

I'll discuss recent work with Irving Dai and Abhishek Mallick in which we study involutions on homology spheres, up to a natural notion of cobordism. Using this notion, we define a 3-dimensional homology bordism group of diffeomorphisms which refines both the homology cobordism group and the bordism group of diffeomorphisms. The subgroup generated by involutions provides a new algebraic framework in which to study corks: contractible 4-manifolds equipped with involutions on their boundaries which do not extend smoothly to their interiors. Using Heegaard Floer homology, we construct invariants of manifolds with involutions in much the same spirit as involutive Floer homology. We use these invariants to study corks and demonstrate that, very often, the involutions on their boundary do not extend over any contractible 4-manifold. I'll discuss a number of such examples.

Online (Zoom), 2020-05-04, 16:00-17:00

**Paul Wedrich (Max Planck/Bonn/MSRI):** Invariants of 4-manifolds from Khovanov-Rozansky link homology

Ribbon categories are 3-dimensional algebraic structures that control quantum link polynomials and that give rise to 3-manifold invariants known as skein modules. I will describe how to use Khovanov-Rozansky link homology, a categorification of the gl(N) quantum link polynomial, to obtain a 4-dimensional algebraic structure that gives rise to vector space-valued invariants of smooth 4-manifolds. The technical heart of this construction is the newly established functoriality of Khovanov-Rozansky homology in the 3-sphere. Based on joint work with Scott Morrison and Kevin Walker.
Online (Zoom), 2020-04-27, 16:00-17:00

**Morgan Weiler (Rice University):** Embedded contact homology and surface dynamics

Certain Hamiltonian surface symplectomorphisms can be embedded as the return map of a Reeb flow on a contact three-manifold. We will explain how to use embedded contact homology to study the dynamics of these symplectomorphisms, and conversely, progress towards computing the embedded contact homology of a three-manifold from an open book decomposition.

Online (Zoom), 2020-04-20, 16:00-17:00

**Artem Kotelskiy (Indiana University):** Knot homologies through the lens of immersed curves

A variety of cut-and-paste techniques is being developed to study Khovanov and Heegaard Floer homologies. We will describe one such technique, centered around immersed curves in surfaces. First, a criterion for when a bordered invariant can be viewed as an immersed curve will be given. Next, we will interpret knot Floer homology as an immersed curve in the twice-punctured disc, and describe how it is related to the immersed curve associated to the knot complement. After that we will describe Khovanov-theoretic curve invariants associated to 4-ended tangles, along with their applications. Drawing inspiration from the Heegaard Floer world, we will also describe an enhancement of the latter construction recovering annular sutured Khovanov homology. The talk is based on joint works with Liam Watson and Claudius Zibrowius.

Online (Zoom), 2020-04-13, 16:00-17:00

**Wai-kit Yeung (Indiana University):** Perverse sheaves and knot contact homology

Knot contact homology is an invariant of knots/links originally defined by counting pseudoholomorphic disks. In this talk, we present an algebraic formalism that gives a new construction of knot contact homology (in fact an extension of it). The input for this construction is a natural braid group action on the category of perverse sheaves on the 2-dimensional disk. This is joint work with Yuri Berest and Alimjon Eshmatov.
Caltech, Linde 310, 2020-03-09, 16:00-18:00

**Yongbin Ruan (Zhejiang University):** BCOV axioms of Gromov-Witten theory of Calabi-Yau 3-folds

One of the biggest and most difficult problems in Gromov-Witten theory is to compute higher genus Gromov-Witten invariants of compact Calabi-Yau 3-folds such as the quintic 3-fold. There is a collection of remarkable axioms/conjectures from physics (the BCOV B-model) regarding the universal structure or axioms of higher genus Gromov-Witten theory of Calabi-Yau 3-folds. In the talk, I will first explain the four BCOV axioms explicitly for the quintic 3-fold. Then, I will outline a solution for 3+1/2 of them.

**Josh Greene (Boston College):** On loops intersecting at most once

How many simple closed curves can you draw on the closed surface of genus g in such a way that no two are isotopic and no two intersect in more than k points? It is known how to draw a collection in which the number of curves grows as a polynomial in g of degree k+1, and conjecturally, this is the best possible. I will describe a proof of an upper bound that matches this function up to a factor of log(g). It involves hyperbolic geometry, covering spaces, and probabilistic combinatorics.

UCLA, Geology 3656, 2019-11-04, 16:30-18:30

**Nate Bottman (USC):** Functoriality for the Fukaya category and a compactified moduli space of pointed vertical lines in C^2

A Lagrangian correspondence between symplectic manifolds induces a functor between their respective Fukaya categories. I will begin by introducing this construction, along with a family of abstract polytopes called 2-associahedra (introduced in math/1709.00119), which control the coherences among this collection of functors. Next, I will describe new joint work with Alexei Oblomkov (math/1910.02037), in which we construct a compactification of the moduli space of configurations of pointed vertical lines in $\mathbb{C}^2$ modulo affine transformations $(x,y) \mapsto (ax+b, ay+c)$.
These spaces are proper complex varieties with toric lci singularities, which are equipped with forgetful maps to $\overline{M}_{0,r}$. Our work yields a smooth structure on the 2-associahedra, thus completing one of the last remaining steps toward a complete functoriality structure for the Fukaya category.

**Peter Smillie (Caltech):** Hyperbolic planes in Minkowski 3-space

Can you parametrize the space of isometric embeddings of the hyperbolic plane into Minkowski 3-space? I'll give a partial result and conjectural answer, in terms of, equivalently, domains of dependence, measured laminations, or lower semicontinuous functions on the circle. Using the Gauss map and its inverse, I'll then interpret this result in terms of harmonic maps to the hyperbolic plane. Finally, I'll restrict to the case where the isometric embedding is invariant under a group action, and describe connections to Teichmüller space. This is all joint work with Francesco Bonsante and Andrea Seppi.

### 2018-2019

Caltech, Linde 310, 2019-04-29, 16:00-18:00

**Claudius Zibrowius (University of British Columbia):** Khovanov homology and the Fukaya category of the 3-punctured disc

This talk will focus on a classification result for complexes over a certain quiver algebra and its consequences for Khovanov homology of 4-ended tangles. In particular, I will introduce a family of immersed curve invariants for pointed 4-ended tangles, whose intersection theory computes reduced Khovanov homology. This is joint work in progress with Artem Kotelskiy and Liam Watson, which was inspired by recent work of Matthew Hedden, Christopher Herald, Matthew Hogancamp and Paul Kirk.

**Nathan Dowlin (Dartmouth):** A spectral sequence from Khovanov homology to knot Floer homology

Khovanov homology and knot Floer homology are two knot invariants which are defined using very different techniques, with Khovanov homology having its roots in representation theory and knot Floer homology in symplectic geometry.
However, they seem to contain a lot of the same topological data about knots. Rasmussen conjectured that this similarity stems from a spectral sequence from Khovanov homology to knot Floer homology. In this talk I will give a construction of this spectral sequence. The construction utilizes a recently defined knot homology theory HFK_2 which provides a framework in which the two theories can be related.

USC, KAP 414, 2019-04-22, 16:30-18:30

**David Ayala (Montana State U.):** Factorization homology: sigma-models as state-sum TQFTs

Roughly, factorization homology pairs an n-category and an n-manifold to produce a chain complex. Factorization homology is to state-sum TQFTs as singular homology is to simplicial homology: the former is manifestly well-defined (i.e. independent of auxiliary choices), continuous (i.e. carries a continuous action of diffeomorphisms), and functorial; the latter is easier to compute. Examples of n-categories to input into this pairing arise, through deformation theory, from perturbative sigma-models. For such n-categories, this state-sum expression agrees with the observables of the sigma-model; this is a form of Poincaré duality, which yields some surprising dualities among TQFTs. A host of familiar TQFTs are instances of factorization homology; many others are speculatively so. The first part of this talk will tour through some essential definitions in what's described above. The second part of the talk will focus on familiar instances of factorization homology, highlighting the Poincaré/Koszul duality result. The last part of the talk will speculate on more such instances.

**Francisco Arana Herrera (Stanford):** Counting square-tiled surfaces with prescribed real and imaginary foliations

Let X be a closed, connected, hyperbolic surface of genus 2. Is it more likely for a simple closed geodesic on X to be separating or non-separating? How much more likely? In her thesis, Mirzakhani gave very precise answers to these questions.
One can ask analogous questions for square-tiled surfaces of genus 2 with one horizontal cylinder. Is it more likely for such a square-tiled surface to have separating or non-separating horizontal core curve? How much more likely? Recently, Delecroix, Goujard, Zograf, and Zorich gave very precise answers to these questions. Surprisingly enough, their answers were exactly the same as the ones in Mirzakhani's work. In this talk we explore the connections between these counting problems, showing they are related by more than just an accidental coincidence.

UCLA, MS 6221, 2019-04-15, 16:00-18:00

**Peter Lambert-Cole (Georgia Tech):** Bridge trisections and the Thom conjecture

The classical degree-genus formula computes the genus of a nonsingular algebraic curve in the complex projective plane. The well-known Thom conjecture posits that this is a lower bound on the genus of a smoothly embedded, oriented and connected surface in CP2. The conjecture was first proved twenty-five years ago by Kronheimer and Mrowka, using Seiberg-Witten invariants. In this talk, we will describe a new proof of the conjecture that combines contact geometry with the novel theory of bridge trisections of knotted surfaces. Notably, the proof completely avoids any gauge theory or pseudoholomorphic curve techniques.

**James Conway (UC Berkeley):** Classifying contact structures on hyperbolic 3-manifolds

Two of the most basic questions in contact topology are which manifolds admit tight contact structures, and, on those that do, whether we can classify such structures. In dimension 3, these questions have been answered for large classes of manifolds, but with a notable absence of hyperbolic manifolds. In this talk, we will see a new classification of contact structures on a family of hyperbolic 3-manifolds arising from Dehn surgery on the figure-eight knot, and see how it suggests some structural results about tight contact structures. This is joint work with Hyunki Min.
USC, KAP, 2018-11-12, 16:30-18:30

**Peter Samuelson (UC Riverside):** The Hall algebra of the Fukaya category of a surface

The Hall algebra of an abelian (or triangulated) category has a basis given by isomorphism classes of objects, and the product "counts extensions" ("counts distinguished triangles"). This construction has been important in representation theory; e.g., it gives a conceptual construction of quantum groups. We will discuss a conjectural description of the Hall algebra of the Fukaya category of a surface (using the version defined by Haiden, Katzarkov, and Kontsevich). We also discuss a connection to the skein algebra of the surface. (This is joint work with B. Cooper.)

**Sherry Gong (UCLA):** Regarding the computation of singular instanton homology for links

We discuss some computations arising from the spectral sequence constructed by Kronheimer and Mrowka relating the Khovanov homology of a link to its singular instanton homology.

Caltech, Linde 310, 2018-11-05, 16:00-18:00

**Chris Gerig (Harvard):** SW=Gr

Whenever the Seiberg-Witten (SW) invariants of a 4-manifold X are defined, there exist certain 2-forms on X which are symplectic away from some circles. When there are no circles, i.e. X is symplectic, Taubes' "SW=Gr" theorem asserts that the SW invariants are equal to well-defined counts of J-holomorphic curves (Taubes' Gromov invariants). In this talk I will describe an extension of Taubes' theorem to non-symplectic X: there are well-defined counts of J-holomorphic curves in the complement of these circles, which recover the SW invariants. This "Gromov invariant" interpretation was originally conjectured by Taubes in 1995.

**Biji Wong (CIRGET Montreal):** A Floer homology invariant for 3-orbifolds via bordered Floer theory

Using bordered Floer theory, we construct an invariant for 3-orbifolds with singular set a knot that generalizes the hat flavor of Heegaard Floer homology.
We show that for a large class of 3-orbifolds the orbifold invariant behaves like HF-hat, in that the orbifold invariant, together with a relative Z_2-grading, categorifies the order of H_1^orb. When the 3-orbifold arises as Dehn surgery on an integer-framed knot in S^3, we use the {-1,0,1}-valued knot invariant epsilon to determine the relationship between the orbifold invariant and HF-hat of the 3-manifold underlying the 3-orbifold.

UCLA, MS 6627, 2018-10-15, 16:00-18:00

**Lei Chen (Caltech):** Section problems

In this talk, I will discuss a direction of study in topology: section problems. There are many variations of the problem: Nielsen realization problems, sections of a surface bundle, sections of a bundle with a special property (e.g. a nowhere-zero vector field). I will discuss some techniques, including homology, the Thurston-Nielsen classification, and dynamics. I will also share many open problems. Some of the results are joint work with Nick Salter.

**Lisa Piccirillo (UT Austin):** The Conway knot is not slice

Surgery-theoretic classifications fail for 4-manifolds because many 4-manifolds have second homology classes not representable by smoothly embedded spheres. Knot traces are the prototypical example of 4-manifolds with such classes. I'll give a flexible technique for constructing pairs of distinct knots with diffeomorphic traces. Using this construction, I will show that there are knot traces where the minimal genus smooth surface generating second homology is not the obvious one, resolving Question 1.41 on the 1978 Kirby problem list. I will also use this construction to show that the Conway knot does not bound a smooth disk in the four-ball, which completes the classification of slice knots under 13 crossings and gives the first example of a non-slice knot which is both topologically slice and a positive mutant of a slice knot.
### 2017-2018

UCLA, MS 6627, 2018-04-23, 16:00-18:00

**Allison Moore (UC Davis):** Distance one lens space fillings and band surgery

Band surgery is an operation that transforms a link into a new link. When the operation is compatible with orientations on the links involved, it is called coherent band surgery; otherwise it is called non-coherent. We will look at the behavior of the signature of a knot under non-coherent band surgery, and also classify all band surgery operations from the trefoil knot to the $T(2, n)$ torus knots and links. This classification is by way of a related three-manifold problem that we solve by studying the Heegaard Floer d-invariants under integral surgery along knots in the lens space $L(3,1)$. If time permits, I will mention some motivation for the study of band surgery on knots from a DNA topology perspective. Parts of this project are joint work with Lidman and Vazquez.

**Danny Ruberman (Brandeis):** Seiberg-Witten invariants of 4-dimensional homology circles

Most applications of gauge theory in 4-dimensional topology are concerned with simply-connected manifolds with non-trivial second homology. I will discuss the opposite situation, first describing a Seiberg-Witten invariant for manifolds with first homology = Z and vanishing second homology; this invariant has an unusual index-theoretic correction term. I will discuss recent work with Jianfeng Lin and Nikolai Saveliev giving a new formula for this invariant in terms of monopole homology, and some calculations and applications.

Caltech, E-Bridge 201, 2018-04-02, 16:00-18:00

**Yongbin Ruan (University of Michigan):** The structure of higher genus Gromov-Witten invariants of the quintic 3-fold

The computation of higher genus Gromov-Witten invariants of the quintic 3-fold (or compact Calabi-Yau manifolds in general) has been a focal point of research in geometry and physics for more than twenty years.
A series of deep conjectures has been proposed via mirror symmetry for the specific solutions as well as the structures of its generating functions. Building on our initial success with a proof of the genus-two conjectural formula of BCOV, we present a proof of two conjectures regarding the structure of the theory. The first is Yamaguchi-Yau's conjecture that its generating function is a polynomial in five generators, and the other is the famous holomorphic anomaly equation, which governs the dependence on four out of the five generators. This is joint work with Shuai Guo and Felix Janda.

**Li-Sheng Tseng (UC Irvine):** Symplectic geometry as topology of odd sphere bundles

We will motivate the consideration of odd-dimensional sphere bundles over symplectic manifolds where the Euler class of the fiber bundle is given by powers of the symplectic structure. The topological invariants of these odd sphere bundles are directly related to the symplectic invariants of the base manifold. We will describe how we can use such a relation to reinterpret symplectic invariants as topological invariants of the higher-dimensional odd sphere bundles, and also how topological methods to study the odd sphere bundles can point to new methods to study symplectic geometry. This talk is based on joint work with Hiro Tanaka.

Caltech, E-Bridge 201, 2017-12-04, 16:00-18:00

**Zhouli Xu (MIT):** Smooth structures, stable homotopy groups of spheres and motivic homotopy theory

Following Kervaire-Milnor, Browder and Hill-Hopkins-Ravenel, Guozhen Wang and I showed that the 61-sphere has a unique smooth structure and is the last odd-dimensional case: $S^1, S^3, S^5$ and $S^{61}$ are the only odd-dimensional spheres with a unique smooth structure. The proof is a computation of stable homotopy groups of spheres.
We introduce a method that computes differentials in the Adams spectral sequence by comparing with differentials in the Atiyah-Hirzebruch spectral sequence for real projective spectra, through the Kahn-Priddy theorem. I will also discuss recent progress on computing stable stems using motivic homotopy theory, with Dan Isaksen and Guozhen Wang.

**Raphael Zentner (University of Regensburg):** Irreducible SL(2,C)-representations of integer homology 3-spheres

We prove that the splicing of any two non-trivial knots in the 3-sphere admits an irreducible SU(2)-representation of its fundamental group. This uses instanton gauge theory, and in particular a non-vanishing result of Kronheimer-Mrowka and some new results that we establish for holonomy perturbations of the ASD equation. Using a result of Boileau, Rubinstein and Wang (which builds on the geometrization theorem of 3-manifolds), it follows that the fundamental group of any integer homology 3-sphere different from the 3-sphere admits irreducible representations in SL(2,C).

USC, KAP 414, 2017-11-20, 15:45-18:00

**Daniel Alvarez-Gavela (Stanford):** The simplification of singularities of Lagrangian and Legendrian fronts

The envelope of light rays reflected or refracted by a curved surface is called a caustic, and generically it has semi-cubical cusp singularities at isolated points. In generic families depending on one real parameter, the cusps of the caustic will be born or die in pairs. At such an instance of birth/death the caustic traces a swallowtail singularity. This bifurcation is also known as the Legendrian Reidemeister I move. For families depending on more parameters, or for front projections of higher-dimensional Legendrians (or Lagrangians), the generic caustic singularities become more complicated. As the dimension increases the situation quickly becomes intractable, and there is no explicit understanding or classification possible in the general case.
In this lecture we will present a full h-principle (C^0-close, relative, parametric) for the simplification of higher singularities of caustics into superpositions of the familiar semi-cubical cusp. As a corollary we will obtain a Reidemeister-type theorem for families of Legendrian knots in standard contact Euclidean 3-space which depend on an arbitrary number of parameters. We will also explain the relation to Nadler's program for the arborealization of singularities of Lagrangian skeleta, and give several other potential applications of the h-principle to symplectic and contact topology.

**Ciprian Manolescu (UCLA):** A sheaf-theoretic model for SL(2,C) Floer homology

I will explain the construction of a new homology theory for three-manifolds, defined using perverse sheaves on the SL(2,C) character variety. Our invariant is a model for an SL(2,C) version of Floer's instanton homology. I will present a few explicit computations for Brieskorn spheres, and discuss the connection to the Kapustin-Witten equations and Khovanov homology. This is joint work with Mohammed Abouzaid.

UCLA, MS 6627, 2017-11-06, 16:00-18:00

**Sheel Ganatra (USC):** Liouville sectors and localizing Fukaya categories

We introduce a new class of Liouville manifolds-with-boundary, called Liouville sectors, and show they have well-behaved, covariantly functorial Fukaya/Floer theories. Stein manifolds frequently admit coverings by Liouville sectors, which can then be used to study the Fukaya category of the total space. Our first main result in this setup is a local criterion for generating (global) Fukaya categories. One of our goals, using this framework, is to obtain a combinatorial presentation of the Fukaya category of any Stein manifold. This is joint work with John Pardon and Vivek Shende.
**Nathan Dunfield (UIUC):** An SL(2, R) Casson-Lin invariant and applications

When M is the exterior of a knot K in the 3-sphere, Lin showed that the signature of K can be viewed as a Casson-style signed count of the SU(2) representations of pi_1(M) where the meridian has trace 0. This was later generalized to the fact that the signature function of K on the unit circle counts SU(2) representations as a function of the trace of the meridian. I will define the SL(2, R) analog of these Casson-Lin invariants, and explain how it interacts with the original SU(2) version via a new kind of smooth resolution of the real points of certain SL(2, C) character varieties in which both kinds of representations live. I will use the new invariant to study left-orderability of Dehn fillings on M using the translation extension locus I introduced with Marc Culler, and also give a new proof of a recent theorem of Gordon's on parabolic SL(2, R) representations of two-bridge knot groups. This is joint work with Jake Rasmussen (Cambridge).

### 2016-2017

Caltech, Sloan 151, 2017-04-17, 16:00-18:00

**Steven Frankel (Yale University):** Calegari's conjecture for quasigeodesic flows

We will discuss two kinds of flows on 3-manifolds: quasigeodesic and pseudo-Anosov. Quasigeodesic flows are defined by a tangent condition, that each flowline is coarsely comparable to a geodesic. In contrast, pseudo-Anosov flows are defined by a transverse condition, where the flow contracts and expands the manifold in different directions. When the ambient manifold is hyperbolic, there is a surprising relationship between these apparently disparate classes of flows. We will show that a quasigeodesic flow on a closed hyperbolic 3-manifold has a coarsely contracting-expanding transverse structure, a generalization of the strict transverse contraction-expansion of a pseudo-Anosov flow.
This behavior can be seen "at infinity," in terms of a pair of laminar decompositions of a circle, which we use to prove Calegari's conjecture: every quasigeodesic flow on a closed hyperbolic 3-manifold can be deformed into a pseudo-Anosov flow.

**Duncan McCoy (UT Austin):** Characterizing slopes for torus knots

We say that p/q is a characterizing slope for a knot K in the 3-sphere if the oriented homeomorphism type of p/q-surgery is sufficient to determine the knot K uniquely. I will discuss the problem of determining which slopes are characterizing for torus knots, paying particular attention to non-integer slopes. This problem is related to the question of which knots in the 3-sphere have Seifert fibered surgeries.

USC, KAP 245, 2017-04-10, 16:30-18:30

**Julien Paupert (Arizona State):** Rank 1 deformations of non-cocompact hyperbolic lattices

Let X be a negatively curved symmetric space and Gamma a noncocompact lattice in Isom(X). We show that small, parabolic-preserving deformations of Gamma into the isometry group of any negatively curved symmetric space containing X remain discrete and faithful (the cocompact case is due to Guichard). This applies in particular to a version of Johnson-Millson bending deformations, providing for all n infinitely many noncocompact lattices in SO(n,1) which admit discrete and faithful deformations into SU(n,1). We also produce deformations of the figure-8 knot group into SU(3,1), not of bending type, to which the result applies. This is joint work with Sam Ballas and Pierre Will.

**Oleg Lazarev (Stanford University):** Contact manifolds with flexible fillings

In this talk, I will show that all flexible Weinstein fillings of a given contact manifold have isomorphic integral cohomology. As an application, in dimension at least 5, any almost contact class that has an almost Weinstein filling has infinitely many exotic contact structures.
Using similar methods, I will also construct the first known infinite family of almost symplectomorphic Weinstein domains whose contact boundaries are not contactomorphic. These results are proven by studying Reeb chords of loose Legendrians and positive symplectic homology.

UCLA, MS 6627, 2017-03-13, 16:00-18:00

**Mark Hughes (Brigham Young University):** Neural networks and knot theory

In recent years neural networks have received a great deal of attention due to their remarkable ability to detect subtle and very complex patterns in large data sets. They have become an important machine learning tool and have been used extensively in many fields, including computer vision, fraud detection, artificial intelligence, and financial modeling. Knots in 3-space and their associated invariants provide a rich data set (with many unanswered questions) on which to apply these techniques. In this talk I will describe neural networks, and outline how they can be applied to the study of knots in 3-space. Indeed, these networks can be applied to answer a number of algebraic and geometric problems involving knots and their invariants. I will also outline how neural networks can be used together with techniques from reinforcement learning to construct explicit examples of slice and ribbon surfaces for certain knots.

**John Etnyre (Georgia Tech):** Embeddings of contact manifolds

I will discuss recent results concerning embeddings and isotopies of one contact manifold into another. Such embeddings should be thought of as generalizations of transverse knots in 3-dimensional contact manifolds (where they have been instrumental in the development of our understanding of contact geometry). I will mainly focus on embeddings of contact 3-manifolds into contact 5-manifolds. In this talk I will discuss joint work with Ryo Furukawa aimed at using braiding techniques to study contact embeddings.
Braided embeddings give an explicit way to represent some (maybe all) smooth embeddings and should be useful in computing various invariants. If time permits I will also discuss other methods for embedding and constructions one may perform on contact submanifolds. UCLA MS 5127 2016-11-07 16:00-18:00 Burak Ozbagci (Koc University) Fillings of unit cotangent bundles of nonorientable surfaces We prove that any minimal weak symplectic filling of the canonical contact structure on the unit cotangent bundle of a nonorientable closed surface other than the real projective plane is s-cobordant rel boundary to the disk cotangent bundle of the surface. If the nonorientable surface is the Klein bottle, then we show that the minimal weak symplectic filling is unique up to homeomorphism. (This is joint work with Youlin Li.) Matt Hogancamp (USC) Categorical diagonalization and link homology I will discuss joint work with Ben Elias in which we introduce the notion of a diagonalizable functor and give a categorical analogue of the usual minimal polynomial condition for diagonalizability. As our main application we prove that the Rouquier complex associated to the full-twist braid acts diagonalizably on the category of Soergel bimodules. This has important consequences for the triply graded Khovanov-Rozansky link homology, which I will explain. I will conclude by discussing connections with some recent, very exciting work of Gorsky-Negut-Rasmussen, which suggests that categorical diagonalization is the key to understanding a deep (conjectural) connection between Khovanov-Rozansky homology and Hilbert schemes.
USC KAP 245 2016-10-31 16:30-18:30 Tian Yang (Stanford University) Volume conjectures for Reshetikhin-Turaev and Turaev-Viro invariants In joint work with Qingtao Chen we conjecture that, at the root of unity exp(2πi/r) instead of the root exp(πi/r) usually considered, the Turaev-Viro and the Reshetikhin-Turaev invariants of a hyperbolic 3-manifold grow exponentially, with growth rates respectively connected to the hyperbolic and complex volume of the manifold. This reveals an asymptotic behavior of the relevant quantum invariants that is different from that of Witten's invariants (which grow polynomially by the Asymptotic Expansion Conjecture), and may indicate a geometric interpretation of the Reshetikhin-Turaev invariants that is different from the SU(2) Chern-Simons gauge theory. Recent progress toward these conjectures will be summarized, including joint work with Renaud Detcherry and Effie Kalfagianni. Kasra Rafi (University of Toronto and MSRI) Caltech Sloan 151 2016-10-17 16:00-18:00 Hongbin Sun (UC Berkeley) NonLERFness of arithmetic hyperbolic manifold groups We will show that, for "almost" all arithmetic hyperbolic manifolds with dimension >3, their fundamental groups are not LERF. The main ingredient in the proof is a study of certain graphs of groups with hyperbolic 3-manifold groups being the vertex groups. We will also show that a compact irreducible 3-manifold with empty or tori boundary does not support a geometric structure if and only if its fundamental group is not LERF. Sucharit Sarkar (UCLA) Equivariant Floer homology Given a Lie group G acting on a symplectic manifold preserving a pair of Lagrangians setwise, I will describe a construction of G-equivariant Lagrangian Floer homology. This does not require G-equivariant transversality, which allows the construction to be flexible. Time permitting, I will talk about applying this to the O(2)-action on Seidel-Smith's symplectic Khovanov homology.
This is joint with Kristen Hendricks and Robert Lipshitz. ### 2015-2016 USC KAP 414 2016-03-21 16:30-18:30 Nicolas Tholozan (Univ. Luxembourg) Compact quotients of pseudo-Riemannian hyperbolic spaces A pseudo-Riemannian manifold is a manifold where each tangent space is endowed with a quadratic form that is non-degenerate, but not necessarily positive definite. A typical example is the hyperbolic space H(p,q), which is a pseudo-Riemannian manifold of signature (p,q) and constant negative sectional curvature. It is homogeneous, as it admits a transitive isometric action of the Lie group SO(p,q+1). A long standing question is to determine for which values of (p,q) one can find a discrete subgroup of SO(p,q+1) acting properly discontinuously and cocompactly on H(p,q). In this talk I will show that there is no such action when p is odd and q >0. The proof relies on a computation of the volume of the corresponding quotient manifold. The proof also implies that, when p is even, this volume is essentially rational. I will discuss in more details the case of H(2,1) (the 3-dimensional anti-de Sitter space), for which compact quotients exist and have been described by work of Kulkarni-Raymond and Kassel. Peter Samuelson (University of Iowa) The Homfly skein and elliptic Hall algebras The Homfly skein relations from knot theory can be used to associate an algebra to each (topological) surface. The Hall algebra construction associates an algebra to each smooth (algebraic) curve over a finite field. Using work of Burban and Schiffmann, we show that the skein algebra of the torus is isomorphic to the Hall algebra of an elliptic curve. If time permits we discuss a third (categorical) construction of the same algebra. (Joint with Morton and Licata.) UCLA MS 5127 2016-02-29 16:15-18:30 Eugene Gorsky (UC Davis) Heegaard Floer homology of some L-space links A link is called an L-space link if all sufficiently large surgeries along it are L-spaces. 
It is well known that the Heegaard Floer homology of L-space knots has rank 0 or 1 at each Alexander grading. However, for L-space links with many components the homology usually has bigger ranks and a rich structure. I will describe the homology for algebraic and cable links, following joint work with Jen Hom and Andras Nemethi. In particular, for algebraic links I will construct explicit topological spaces with homology isomorphic to link Floer homology. Sheel Ganatra (Stanford University) Automatically generating Fukaya categories and computing quantum cohomology Suppose one has determined the Floer theory algebra of a finite non-empty collection of Lagrangians in a Calabi-Yau manifold. I will explain that, if the resulting algebra satisfies a finiteness condition called homological smoothness, then the collection automatically split-generates the Fukaya category. In addition, the Hochschild invariants of the algebra (and hence of the whole Fukaya category) are automatically isomorphic to the quantum cohomology ring. This result immediately extends to the setting of monotone/non-Calabi-Yau symplectic manifolds, under an additional hypothesis on the rank of the algebra's 0th Hochschild cohomology. The proofs make extensive use of joint work with Perutz and Sheridan, which in turn is part of a further story about recovering Gromov-Witten invariants from the Fukaya category. Caltech Sloan 153 2016-02-08 16:00-18:00 Anna Wienhard (University of Heidelberg) Maximal representations and projective structures on iterated sphere bundles The Toledo number is a numerical invariant associated to representations of fundamental groups of surfaces into Lie groups of Hermitian type. Maximal representations are those representations for which the Toledo number is maximal. They form connected components of the representation variety. In the case when the Lie group is SL(2,R) = Sp(2,R) they correspond precisely to holonomy representations of hyperbolic structures.
Maximal representations into the symplectic group Sp(2n,R) generalize this situation with a lot of new features appearing. I will describe some of these new features and explain how maximal representations arise as holonomy representations of projective structures on iterated sphere bundles over surfaces. Shicheng Wang (Peking University) Chern--Simons theory, surface separability, representation volumes, and dominations of 3-manifolds The talk will start with mapping degree sets and simplicial volumes. We then discuss recent results on virtual representation volumes and on virtual dominations of 3-manifolds, as well as their relations. Time permitting, we may end with the high-dimensional applications of representation volumes. This is joint work with P. Derbez, Y. Liu and H. Sun. UCLA MS 6229 2015-11-30 16:00-18:00 Ailsa Keating (Columbia University) Higher-dimensional Dehn twists and symplectic mapping class groups Given a Lagrangian sphere S in a symplectic manifold M of any dimension, one can associate to it a symplectomorphism of M, the Dehn twist about S. This generalises the classical two-dimensional notion. These higher-dimensional Dehn twists naturally give elements of the symplectic mapping class group of M, i.e. $\pi_0 (Symp (M))$. The goal of the talk is to present parallels between properties of Dehn twists in dimension 2 and in higher dimensions, with an emphasis on relations in the mapping class group. Hiro Lee Tanaka (Harvard University) Factorization homology and topological field theories This is joint work with David Ayala and John Francis. Factorization homology is a way to construct invariants of manifolds out of some algebraic data. Examples so far include singular homology, intersection homology, Bartlett's spin net formalism for Turaev-Viro invariants, Reshetikhin-Turaev invariants for framed knots, and Salvatore's non-Abelian Poincare Duality. It has also been used by Ayala-Francis to prove the cobordism hypothesis.
In this talk we'll give some basic examples and prove some classification results akin to Brown Representability. Caltech Sloan 151 2015-11-16 16:00-18:00 Mike Hill (UCLA) A higher-height lift of Rohlin's Theorem: on \eta^3 Rohlin's theorem on the signature of Spin 4-manifolds can be restated in terms of the connection between real and complex K-theory given by homotopy fixed points. This comes from a bordism result about Real manifolds versus unoriented manifolds, which, in turn, comes from a C_2-equivariant story. I'll describe a surprising analogue of this for larger cyclic 2-groups, showing that the element eta cubed is never detected! In particular, for any bordism theory orienting these generalizations of Real manifolds, the three-torus is always a boundary. Joshua Greene (Boston College) Definite surfaces and alternating links I will describe a characterization of alternating links in terms intrinsic to the link complement and derive some consequences of it, including new proofs of some of Tait's conjectures. USC KAP 245 2015-10-19 16:30-18:30 Jeff Danciger (UT Austin) Convex projective structures on non-hyperbolic three-manifolds We discuss a program underway to determine which closed three-manifolds admit convex real projective structures and its implications in the search for low-dimensional matrix representations of three-manifold groups. While every hyperbolic structure is a convex projective structure, examples of convex projective structures on non-hyperbolic three-manifolds were found only recently by Benoist. We produce a large source of new examples, including the doubles of many hyperbolic knot and link complements. The strategy is to suitably deform cusped hyperbolic three-manifolds and then (convexly) glue them together. Joint work with Sam Ballas and Gye-Seon Lee.
Faramarz Vafaee (Caltech) L-spaces and rationally fibered knots The main focus of the talk will be on proving fiberedness results for knots in L-spaces with either L-space or S1 x S2 surgeries. Recall that an L-space is defined to be a rational homology three-sphere with the same Heegaard Floer homology as a lens space. We prove that knots in L-spaces with S1 x S2 surgeries are Floer simple and fibered. Moreover, the induced contact structure on the ambient manifold is tight. We also prove that a knot K in an L-space Y with a non-trivial L-space surgery is fibered provided that the orthogonal complement of K with respect to the linking form of Y vanishes. This generalizes the result of Boileau-Boyer-Cebanu-Walsh, in which they assume the knot is primitive. This work is joint with Yi Ni. ### 2014-2015 UCLA MS 6627 2015-04-06 16:00-18:00 Steven Sivek (Princeton University) Augmentations of Legendrian knots and constructible sheaves Given a Legendrian knot in R^3, Shende-Treumann-Zaslow defined a category of constructible sheaves on the plane with singular support controlled by the front projection of the knot. They conjectured that this is equivalent to a category determined by the Legendrian contact homology of the knot, namely Bourgeois-Chantraine's augmentation category. Although this conjecture is false, it does hold if one replaces the augmentation category with a closely related variant. In this talk, I will describe this category and some of its properties and outline the proof of equivalence. This is joint work with Lenny Ng, Dan Rutherford, Vivek Shende, and Eric Zaslow. Hirofumi Sasahira (Nagoya University) Spin structures on Seiberg-Witten moduli spaces We will prove that under a certain condition the moduli space of solutions to the Seiberg-Witten equations on a 4-manifold has a canonical spin structure. The spin bordism class of the moduli space is a differential topological invariant of the 4-manifold. 
We will show that this invariant is nontrivial for the connected sum of some symplectic 4-manifolds. UCLA MS 5127 2014-11-17 16:00-18:00 David Rose (USC) Annular Khovanov homology via trace decategorification We'll review work of the speaker, joint with Lauda and Queffelec, relating Khovanov(-Rozansky) homology to categorified quantum sl_m via categorical skew Howe duality. We'll then discuss work in progress (joint with Queffelec) showing how to obtain annular Khovanov homology from this "skew Howe 2-functor" via trace decategorification. This provides a conceptual basis for this invariant, and in particular explains the recent discovery of Grigsby-Licata-Wehrli that the annular Khovanov homology of a link carries an action of sl_2. Our framework extends to give the first construction of sl_n annular Khovanov-Rozansky homology (which carries an action of sl_n), and should lead to a proof of a conjecture of Auroux-Grigsby-Wehrli relating annular Khovanov homology to the Hochschild homology of endomorphism algebras in category O. Liam Watson (University of Glasgow) A categorified view of the Alexander invariant Alexander invariants are classical objects in low-dimensional topology stemming from a natural module structure on the homology of the universal abelian cover. This is the natural setting in which to define the Alexander polynomial of a knot, for example, and given that this polynomial arises as graded Euler characteristic in knot Floer homology, it is natural to ask if there is a Floer-theoretic counterpart to the Alexander invariant. There is: This talk will describe a TQFT due to Donaldson, explain how it is categorified by bordered Heegaard Floer homology, and from this place the Alexander invariant in a Heegaard Floer setting. This is joint work with Jen Hom and Tye Lidman. 
Caltech Sloan 151 2014-11-17 16:00-18:00 Boris Coskunuzer (Koc University and MIT) Minimal Surfaces with Arbitrary Topology in H^2xR In this talk, we show that any open orientable surface can be embedded in H^2xR as a complete area-minimizing surface. Furthermore, we will discuss the asymptotic Plateau problem in H^2xR, and give a fairly complete solution. Ina Petkova (Rice University) Combinatorial tangle Floer homology In joint work with Vera Vertesi, we extend the functoriality in Heegaard Floer homology by defining a Heegaard Floer invariant for tangles which satisfies a nice gluing formula. We will discuss the construction of this combinatorial invariant for tangles in S^3, D^3, and I x S^2. The special case of S^3 gives back a stabilized version of knot Floer homology. USC KAP 414 2014-11-03 16:00-18:00 Anna Wienhard (Heidelberg and Caltech) Anosov representations and proper actions When M is a Riemannian manifold, a discrete subgroup of isometries acts properly on M. This is not true for semi-Riemannian manifolds. For a homogeneous space there is a criterion, due to Benoist and Kobayashi, which describes when the action of a discrete subgroup of isometries is proper. In this talk I will explain a connection between Anosov representations and proper actions on homogeneous spaces, which relies on a new characterization of Anosov representations. As an application, for a fixed convex cocompact subgroup G' of a Lie group G of rank one, one gets a precise description of the set of proper actions of G' on the group G by left and right multiplication. This is joint work with Francois Gueritaud, Olivier Guichard, and Fanny Kassel. Jeremy Toulisse (University of Luxembourg) Minimal maps between hyperbolic surfaces, and anti-de Sitter geometry Around 1990, Geoff Mess discovered deep connections between 3-dimensional anti-de Sitter (AdS) geometry and the theory of hyperbolic surfaces.
These ideas were further expanded by Schoen, Labourie, Schlenker, Krasnov and others to establish an equivalence between minimal Lagrangian diffeomorphisms between hyperbolic surfaces and maximal surfaces in AdS space-time. We will explain this connection, and extend it to manifolds with conical singularities.
https://flashman.neocities.org/Presentations/MathFest2013Conics/knowls/proof.DZ.knowl.html
Preface: C is a parabola if and only if C has exactly one ideal point. Proof: Suppose $\Delta = B^2 - 4AC = 0$. We examine $f(x,y,z) = Ax^2 + Bxy + Cy^2 + Dxz + Eyz + Fz^2 = 0$ when $z = 0$, that is, $f(x,y,0) = Ax^2 + Bxy + Cy^2 = 0$ (*). • Case 1: $A = 0$. Then $B = 0$ [since $\Delta = B^2 - 4AC = 0$] and $C \ne 0$, so (*) reduces to $Cy^2 = 0$ and $y = 0$. Thus C has only the one ideal point <$1,0,0$>. • Case 2: $A \ne 0$. Solving the quadratic (*) for $x$ in terms of $y$, the vanishing discriminant $\Delta = 0$ gives the double root $x = \frac{-B}{2A}y$. Thus C has exactly one ideal point, with homogeneous coordinates <$B,-2A,0$>. In either case, C has exactly one ideal point, and so C is a parabola.
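The case split in the proof is easy to check mechanically. Below is a small Python sketch (the helper name and interface are my own, not part of the original page) that counts the real ideal points of a conic directly from the coefficients $A$, $B$, $C$ of (*):

```python
# Count the ideal points of the conic Ax^2 + Bxy + Cy^2 + Dxz + Eyz + Fz^2 = 0
# by setting z = 0 and solving Ax^2 + Bxy + Cy^2 = 0 projectively, exactly as
# in the proof above. Only A, B, C matter on the ideal line z = 0.

def ideal_points(A, B, C):
    """Real projective solutions <x, y, 0> of Ax^2 + Bxy + Cy^2 = 0."""
    if A == 0:
        if B == 0:
            return [(1, 0, 0)]              # Case 1: Cy^2 = 0 forces y = 0
        return [(1, 0, 0), (-C, B, 0)]      # Delta = B^2 > 0: two ideal points
    disc = B * B - 4 * A * C                # Delta = B^2 - 4AC
    if disc < 0:
        return []                           # no real ideal points: an ellipse
    if disc == 0:
        return [(B, -2 * A, 0)]             # Case 2: double root x = -(B/2A) y
    r = disc ** 0.5
    return [(-B + r, 2 * A, 0), (-B - r, 2 * A, 0)]   # two roots: a hyperbola

# The parabola y = x^2 homogenizes to x^2 - yz = 0, so A = 1, B = C = 0.
print(ideal_points(1, 0, 0))    # one ideal point, <0, -2, 0> ~ <0, 1, 0>
```

With $\Delta = 0$ the function returns exactly one point in both cases of the proof, while $\Delta < 0$ (an ellipse) gives none and $\Delta > 0$ (a hyperbola) gives two.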
https://matheducators.stackexchange.com/questions/15212/the-concept-of-infinity-for-a-5-year-old/15216#15216
# The concept of infinity for a 5-year-old My son, who just turned 5, has been interested in the concept of infinity for a long time. He asks me a lot of questions regarding infinity. For example, not accepting my claim that infinity + any number = infinity, he asked me how old I will be when he himself becomes infinity years old. How should I explain this concept to him in a way that resonates with his existing understanding of mathematics? If it matters, he knows how to add and subtract very large numbers, knows about negative numbers and has figured out tables of any number less than 100. UPDATE: He derived one possible answer a few minutes after I typed the question on this forum. So I have written it as an answer below instead of a comment. Along with many other useful answers and comments here, this will be helpful to future questioners searching for the same thing to satisfy their children. Furthermore, it would be interesting for some to look at the questioning child's own thinking process. • he asked me how old I will be when he himself becomes infinity years old. This might be a perfect opportunity to talk about your (and his) mortality. Feb 13 '19 at 2:22 • Let me also encourage you to continue giving deep, correct answers to your son. Mine is 12 now and I have always tried to answer all questions he had to the best of my knowledge, without undue simplifications. This often was, of course, a learning experience for both of us, and thus a great common experience. Many advanced concepts are more accessible to children than one would think, sometimes more than to adults who have to overcome wrong preconceptions (like, "the electron is a round solid ball with a charge"). Feb 13 '19 at 12:50 • "infinity + any number = infinity" is a blatant fallacy and sure to cause undue confusion. It's the same problem with saying "Infinity is the highest number there is", it's a gross misrepresentation of the concept.
Feb 13 '19 at 16:42 • @NickC As important as understanding mortality is, I'm not sure the best response to "I don't understand math" is "Not to worry, Son. We're all going to die, anyway." – Ray Feb 13 '19 at 16:48 • My kid(4) asked my kid(6) to count down while she completed a task: kid6: "10, 9 .." Kid4: "No no. Start higher!" Kid6: "20, 19.." Kid4: "No. Start at the highest number!" Kid6: "Infinity, infinity, infinity . . " Feb 14 '19 at 22:34 This does not directly concern the $$\infty+1=\infty$$ issue and I am not certain that I understand what you mean by his previous understanding of mathematics, but I wanted to give the following suggestion: 1. Ask your child to name the biggest number he knows (besides $$\infty$$). (Let's say he answers $$1000$$); 2. Tell him to add $$1$$ to it; 3. Ask him again what is the biggest number he knows. (It should be $$1001$$). Repeat the process a few times and he should realize at some point that he can do this indefinitely. He can just keep on adding $$1$$ for free. It doesn't matter if he can't name the numbers eventually, as long as he understands that the next number is one more than the previous one. While this does not necessarily show the various types of infinities that might exist, I think the idea that you can "keep on going" is a fair definition of infinity for a 5 year-old. It's not too hard to understand and it illustrates that infinity is not a number like $$1$$, $$2$$ and $$3$$, but rather an idea: infinity is what you get if you keep on going forever. It is certainly better than the belief I had as a child that infinity was the biggest number; there's no such thing as the biggest if you can keep on adding $$1$$. I hope this helps in some way! Edit: As requested, I should mention that I haven't had the chance to test this with a 5 year-old, but it did work with teenagers (12-16) who had the same question (what is infinity?), as they seemed satisfied with the answer.
I also reiterate that this approach does not treat all types of infinity and should sound quite incomplete to a mathematician. However, there must be some limit to what we can and can't explain to a 5 year-old without sacrificing rigor and without "burning" them out. (Also, they'll have the opportunity to improve their understanding of the concept as they grow up). This particular approach seems to me to be "viable" for a 5 year-old (especially one who "knows negative numbers" and "has figured out tables of any number less than 100", like OP's child). In particular, this approach involves the child in his own learning: he should be the one to realize on his own, inductively, what infinity is. This is much more convincing than "being told" what infinity is and should help with the "resistance" issue OP had. • To me as a programmer $\infty+1=\infty$ results in a type error because + is not defined for a left hand operand of $\infty$ and a right hand operand of a natural number; or if it is defined, it is overloaded for these types and does something different than the other overloads. The important thing is that $\infty$ is, from my programmer's perspective, "not a number" (literally a NaN ;-) ), and you cannot naively use it like a number. Feb 13 '19 at 11:26 • @Peter A. Schneider: Exactly what I think also. Thus, the issue is not "what is $\infty + 1,$" but rather (if the questioner persists along this line of reasoning) "what might be a reasonable way to define what we mean by adding $1$ and $\infty$". For example, we can talk about mixing colors, such as mixing red paint and blue paint, and thus think of this as adding "red" and "blue", but what might we mean by adding "red" and $1$"? Is this even a useful path of inquiry? FYI, the notion of addition of cardinal numbers is simply one way of going about this, not THE way (as useful as it is). Feb 13 '19 at 11:59 • In IEEE floating-point arithmetic, $\infty+1=\infty$. 
:) Feb 13 '19 at 13:18 • @PeterA.Schneider: I guess you don't work with floating-point very often? std::numeric_limits<double>::infinity() is a perfectly valid double in C++, or C +INFINITY. You only get a NaN if you do inf - inf, or inf / inf. inf + 1, inf + inf, inf - 1 are all not errors and give you inf. 1.0/inf evaluates to 0. (godbolt.org/z/oPWwzc) This is somewhat questionable, but it's considered useful to make stuff like 1/(1/x + 1/y) "work" even for x=0 or overflow in the sum. Feb 13 '19 at 13:48 • I appreciate the comments relative to the programming point of view; it hadn't come to mind when I wrote the post. I would just like to reiterate that the presentation I proposed does not take into account everything about infinity. (There are many types of infinities, and ways to think about it; a 5 year-old does not need to know them all. I chose one I thought was simple enough). I may actually have presented the induction principle, rather than infinity, but I think it can give a glimpse of what infinity "looks like". Feb 13 '19 at 15:23 First of all, regardless of age, people need to understand that "infinity" is not a number, and not a placeholder for a number, but an attribute of them (i.e. the fact that you can increase numbers without ever getting to an end). For my children, the concept somehow came into their minds on its own due to the book "Guess How Much I Love You" by Sam McBratney. It plays with ever bigger numbers, and the kids can easily increase the distances used in the book on their own as soon as they learn that, for example, the sun is farther away than the moon, that stars are even farther, etc. They had to increase their number, because otherwise I would increase mine, and "win" the contest laid out in the book.
At some age (I cannot remember if it was 5 or older), the kids figured out that they can make numbers ever larger - even larger than in the book - by adding or multiplying (doubling as in "there and back again"), or whatever operation they learned in school. At that point, the concept of infinity seems to be represented by - literally - "in-finite", or "non-ending". I.e., they understand that there is no end to numbers, that you can go on and on adding ever more of them. • I had the same discussion with my daughter about this book. However, at one point, she said that she loved me as much as everything that exists. This was kind of the end of it. You cannot really go +1 beyond everything that exists, or can you maybe? Feb 13 '19 at 12:34 • We stuck with distances, and got way past sleeping time whenever we really fought it out (the "... and back again" cop-out can be multiplied ad nauseam). :-) @Trilarion – AnoE Feb 13 '19 at 12:38 • @Trilarion Physically, likely no. Mathematically, sure. Feb 13 '19 at 15:47 • Infinity does seem to sometimes be used as a placeholder for a number, like in the notation for a sum to infinity. – caf Feb 15 '19 at 13:30 • >that "infinity" is not a number. In the extended arithmetic, it is. Feb 17 '19 at 22:05 On a piece of paper, he started with writing 10, then 100, then 1000, .... and he stopped after writing 40 zeros with 1. Then he came to me and said, "I understand infinity now; infinity is a number with infinite zeros." The main point is that as most of you suggested, he has now registered infinity in his brain as a concept rather than a number, which is why he used the expression 'a number with infinite zeros'. • This is seriously one of the best things I've seen on stack exchange :D – Esco Feb 17 '19 at 12:10 I'm not sure why the two basic things adults seem to say about infinity are "infinity is not a number" and "∞+1=∞", both of which are at best misleading. 
(Infinity doesn't name a number, but it does refer to a property some numbers can have. ∞+1 is nonsense, $$\aleph_0+1=\aleph_0$$, and $$\omega+1\neq\omega$$.) The problem with talking about infinity with small children is that it's an imprecise term that covers multiple precise ideas that behave differently. Children that age aren't ready for that, so I don't think there are any particular concepts about infinity that it's useful to convey. I certainly don't think there's any reason to prioritize explaining the non-fact that ∞+1=∞ over other non-facts (say, that ∞+1>∞). Rather, I think the goal has to be to enjoy playing around with the concepts. Your son's question - how old you'll be when he turns infinity - is a really good question, and points to the difficulty of dealing with the question: it's not at all clear what the right number system to discuss that in is. • @Tim: Impossibility is not the same as meaninglessness; the fact that it would never happen in the real world doesn't stop us from considering number systems in which it is sensible. But my point was that his son was making the right move, away from something underspecified ("what is addition with infinity") and towards something closer to being sufficiently specified. Feb 16 '19 at 18:33 • Actually, $\infty+1=\infty$ is not nonsense. It's not that $\infty$ is an umbrella term that covers both $\aleph_0$ and $\omega$, and one satisfies $x+1=x$ but the other doesn't. In fact $\infty$ represents a third concept, which unlike both of the above examples solves $2^x=x$. It also has an additive inverse $-\infty\ne\infty$. See here. (Oh, and since things get ever more complicated, see this point at infinity, which instead satisfies $-\infty=\infty$.) – J.G. Feb 17 '19 at 21:52 • @J.G. You are correct that there are other notions of infinity besides the cardinals and ordinals, and that one of those happens to also use the symbol ∞.
(Confusingly, your first link is to the wrong topic - the extended real line, which uses +∞ and -∞, not the unadorned ∞ symbol.) However, context establishes that I was using ∞ in the conventional way as the undifferentiated infinity, for which "∞+1=∞" is nonsense; the fact that there are subfields which do use that notation does not cause ∞ to exclusively refer to that subfield. Feb 18 '19 at 3:04 • @HenryTowsner My link was fine: $\infty$ is an abbreviation for $+\infty$. – J.G. Feb 18 '19 at 6:02 • Son, there is this thing called ℵ0 you need to understand first... Feb 19 '19 at 6:37 Speaking as someone who was that kid, you might be able to explain $$\infty + 1 = \infty$$ via the Hilbert hotel. Imagine a hotel that has an infinite number of rooms, one for every number. Imagine the hotel's full, and another guest shows up. You can make room for that guest by having the guest in room 1 move to room 2, the guest in room 2 move to room 3, and so on. So even though it's full, you can always fit more guests in, so you can always add one. He's also not totally wrong to insist that $$\infty + 1 \neq \infty$$. Infinity isn't really a number, but an idea that you can apply in different ways. There are different types of infinite numbers, with different rules, and in some of them, he'd be quite right that $$\infty + 1 \neq \infty$$. The hotel doesn't have a room $$\infty$$, but if it did, the room next to it would be room $$\infty + 1$$, and would be a different room. Five might be a bit young to understand the idea that different rules lead to different maths, but he can probably grasp the idea that there are different types of infinite numbers, and it's good to tell him that he's not wrong. • Young children will find this neat, but of course from a strict logical point of view this idea assumes that $\infty$ has meaning (or at least some of the meanings) we attach to the idea of a cardinal number.
If we're considering ordinal numbers, then $\infty + 1 \neq \infty.$ And, of course, there are probably other ways of understanding the notion/symbol $\infty$ that result in different interpretations of what $\infty + 1$ could mean. I'm not saying that the different interpretations are useful or have even been studied. My point is that you need an interpretation before going anywhere. Feb 13 '19 at 12:10

• @DaveLRenfro Yes, I think I was trying to convey that. The more I think about what I wrote, the more I realise that the key point is that the rules of maths aren't set in stone, and there are different types of infinite numbers with different rules and different interpretations. I'll see if I can reword it to change the emphasis. Feb 14 '19 at 12:02

• At youtube.com/watch?v=Uj3_KqkI9Zo there's a TED-ed animation of the hotel. Feb 14 '19 at 16:14

• And there's a lovely picture book that uses the Hilbert Hotel idea: The Cat in Numberland by Ivar Ekeland. (bookfinder.com/search/…) Feb 17 '19 at 5:27

My son, also 6 yo, regularly talks about millions and billions and infinity. Obviously, large numbers have some attraction to children of this age. I try to explain that infinity is not a number. Instead, infinity is an order of magnitude which has its own algebraic rules. Plus, minus, division and multiplication do not work the way children learn in elementary school when applied to infinity. My first explanation is that this has also an impact on how we use the words infinity and infinite:

Three meters. (works)
*Infinity meters. (completely wrong)
*Infinite meters. (sounds wrong)
Infinitely many meters. (works)

Another approach is that the concept of numbers does not work. Numbers grow. For every number, there is another number that is larger. The mathematical notation for this concept is $$\forall n \in \mathbb{N}: \exists m \in \mathbb{N}: m > n$$.
If infinity were a number, then this statement would be false: let $$n=\infty$$; then any candidate for a larger number has the form $$\infty + m$$, but $$\infty + m > \infty$$ is false for all $$m \in \mathbb{N}$$ where $$m > 0$$. Surprisingly, children who already learnt addition up to, say, 100, understand this. They understand that 100 is not the largest of all numbers, neither is 1000, neither is a million, and so on. But the fact that addition does not alter the "number" makes them understand that infinity is not a number. In words that are better suited for children, you can also say: Adding any number to infinity does not change its size, because infinity expresses a magnitude, a size, rather than a number. Admittedly, my daughter, 8 yo, understands this point better, because my 6 yo son has not yet learnt addition of numbers larger than his 10 fingers provide.

• The Phantom Tollbooth by Norton Juster. The book is a rather surreal adventure trip through a Wonderland-style setting populated by mad grammarians and mathemagical wizards (among others). There is a section somewhere in the middle where the protagonist is sent on a quest to infinity, which is described in a couple of ways. The concept of infinity discussed here is, if I recall correctly, more akin to the infinity that occurs when you compactify the real numbers, so it isn't quite the same idea as $$\infty + 1 = \infty$$. In any event, the book is quite a lot of fun. Even if you aren't overly concerned with infinity, it is worth reading, and should be at about the level of a 5 year old (maybe a little advanced? I seem to recall having it read to me when I was in first or second grade, so maybe just a little older?).

• The Cat in Numberland by Ivar Ekeland. This book might be a little advanced for a 5 year old, but maybe not – I think that it is intended for 3rd or 4th graders, but might be accessible with the help of a parent. In any event, the book deals with infinity as a cardinal.
There is the basic "a new number arrives, everyone moves up a room" example, but my recollection is that the correspondence between rational numbers and the natural numbers is discussed, as well. I had difficulty getting a copy of the book several years ago, but it appears that it might be back in print(?).

• Just to add on - I'd also recommend the Number Devil, by Hans Magnus Enzensberger. Also may be too advanced (it's aimed at elementary school students, as I recall, and I think I read it when I was 7) but it has some excellent stuff on infinity, including a very good treatment of Hilbert's Hotel-style ideas. Feb 13 '19 at 15:41

• I haven't read the Number Devil, though it is one that I have seen recommended several times before. I should probably pick up a copy of it... Feb 13 '19 at 15:41

• I literally came to this page from HNQ, did Ctrl+F for Phantom Tollbooth, and happily upvoted this. Note there is a quite well-done cartoon movie for it, as well. I enjoyed that quite a bit as a kid. Feb 15 '19 at 2:41

• My favorite line from The Phantom Tollbooth is the directions on how to reach the land of Infinity: "Go down this hallway forever, then turn left..." Feb 18 '19 at 5:48

There is a well-known Christian hymn, Amazing Grace, whose last lyric captures the idea of (countable) infinity quite well, and may be more effective for a five year old because it includes a context in which the notion of infinity can be applied. The lyric goes:

When we’ve been there ten thousand years,
Bright shining as the sun,
We’ve no less days to sing God’s praise
Than when we'd first begun.

Obviously, this brings issues of religion into the matter. If you don't want a Christian hymn, you might rephrase the lyric in the framework of another religion or atheistically. If the child can grasp the notion of an infinite number of future years, then this lyric is "explaining" that $$10,000 + \infty = \infty$$.

• That's a good example.
Though theologically I'd want to point out that it treats eternity as an infinitely long time dimension rather than as being outside of time and space . . . (Which makes it good mathematically, though.) Feb 13 '19 at 18:28

I would start by saying something along the following lines... "You're asking some very grown-up questions for someone that's only 5. Are you ready to do some really, really, grown-up thinking about the answers?" He will, of course, answer 'yes'. I would respond... 'Ok, but this is serious stuff. You need to be ready to take this thinking very seriously.' Only when you really have his attention, do all the 'infinity is not a number' stuff, and all the other things that people have suggested. A 5-year-old thinking about infinity, if he's really thinking about infinity, is a very smart kid indeed. He is probably smart enough to confront the idea of types of thinking beyond what he's encountered. But he doesn't yet realise how he needs to change gear. Teach him that, before you teach him about infinity, and 20 years from now, he'll thank you for all the other things it enabled him to do.

• How has this approach worked for you? Feb 14 '19 at 10:45

I suppose one problem is that your son looks at $$\infty$$ the same way he looks at $$10$$. But infinity is not a natural or real number, even though it has a symbol and can be used in "equations" like $$\infty + 1 = \infty$$. These equations do not have the same meaning and do not follow the same rules as with "normal" numbers — the reason being that at least one operand isn't one. Making clear that infinity is not a single number but is used to describe the unboundedness of number sequences should go a long way towards understanding and is totally within reach of a 5 year old. Let me indulge in my computer science perspective. (I suppose it is correct in purely mathematical ways as well. Please correct me if that is not so.) How do we use "infinity" in math?
For example, we say that the result of some sum is infinite: $$\sum_{i=1}^\infty{f(i)} = \infty$$ for some $$f$$. Similarly for some integral, or simply the value of some function $$f(x)$$ when $$x$$ approaches a certain value. The essence is that we make statements about the outcome of procedures. $$\infty$$ is not a static "value"; it is a statement of what happens to a value which is computed procedurally, provided we never stop. Specifically, it is the statement that we cannot name a limit that this value will not exceed. In other words, infinity can be considered the opposite of a static, fixed value. Making this crucial distinction will go a long way explaining why $$\infty + 1 = \infty$$ holds (even if I don't like writing it at all, as mentioned in a comment elsewhere in this thread): If I cannot put an upper threshold to a result, incrementing the result will still not yield an upper threshold, and that's all that $$\infty$$ means.

• @PeterCordes My point is that $\infty$ is emphatically not a value ;-). It is a statement about the properties of a series of values. Feb 13 '19 at 16:01

• @PeterCordes Not necessarily. As Henry Towsner explains in his answer, we need to define what infinity we're talking about before we can say whether those are equal. If it's an ordinal, then you can add one to it and get a different infinite ordinal. If it's a cardinal, you can add an element to the set that it's the cardinality of and get the exact same infinite cardinal. – Ray Feb 13 '19 at 16:12

• @TommiBrander What is formally correct should solve the understanding issues ;-). I specifically address the problems of the child mentioned in the question: "not accepting my infinity + any number = infinity". I could make that clearer. Feb 14 '19 at 10:45

• Unfortunately, formal correctness and understanding do not always go hand in hand in mathematics.
Feb 14 '19 at 10:48

• @TommiBrander If you think of formal correctness not in terms of notation and other formalisms; but instead in terms of the essence of the used constructs and relations; then I believe the opposite is true. Formal correctness (at least, the absence of formal incorrectness, like "infinity is like 10") is a necessary prerequisite for understanding. Infinity and related concepts were for most of history not properly understood by the brightest minds on earth, until strict correctness was achieved. Often that correct view is simpler than the muddled and unclear previous attempts. Feb 14 '19 at 10:58

My children both learned about infinity at around four to five years old (now 5 and 7). For both of them it was fairly straightforward; it came about with my eldest when he was talking to other kids at school about the biggest number. We talked about trillion, quadrillion, etc.; as they were at a Montessori, it was easy to understand these. Then we talked about googol, and googolplex. Then we talked about a few other numbers, like Graham's number, and the idea of other extremely large numbers. Finally, we talked about infinity.

Because we were talking in the context of very large numbers, the first thing we learned - before anything else - was that infinity is a concept, not a number. A way of thinking about extremely, impossibly large numbers, without actually naming one. This brought a little confusion, until we went through the thought exercise of: "What's the largest number. Okay, add one to it." But ultimately they got the idea of 'concept' pretty easily.

Now, at 5 (almost 6) and 7, they get some of the other ideas pretty easily - like 1/0 approaches infinity, infinity/n is infinity, but 0/0 = undefined. Teaching infinity as a concept made it easy for them to understand these are basically just rules to follow, and that it's not the same as a number.
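Incidentally, the "rules to follow" described here ("infinity plus anything is infinity", "0/0 is undefined") are close to the conventions IEEE floating-point arithmetic adopts, so they can even be demonstrated on a computer. A quick sketch in Python (this illustrates the floating-point conventions, not a mathematical definition of infinity):

```python
import math

inf = float("inf")  # IEEE 754 positive infinity

# "infinity + any number = infinity":
print(inf + 1 == inf)        # True
print(inf + 10**100 == inf)  # True

# "infinity / n is infinity" for positive finite n:
print(inf / 7 == inf)        # True

# Indeterminate forms come out as NaN ("not a number"),
# the floating-point spelling of "undefined":
print(math.isnan(inf - inf))  # True
print(math.isnan(inf * 0))    # True
```

As a comment on this answer notes, mathematically 1/0 is undefined rather than infinite; the IEEE conventions are deliberate engineering choices, which fits the point that these are just rules, not properties of a number.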
Of course, I had a lot of sympathy that week for the poor preschool teacher who had to deal with the arguments 'infinity is the biggest number' 'no, it's a concept, not a number' between my children and the other children...

• I'm not particularly familiar with this site's standards, so perhaps this isn't in line with what you expect (sorry if so), but typically showing how one taught something to a similar child is an answer to how should I teach my child? – Joe Feb 13 '19 at 16:55

• It's an answer. It shares an approach that's been actually used with actual children, and the results it had. Not all the answers do this (mine included). Feb 13 '19 at 18:44

• $1/0=\infty$ is not even undefined. It's sheer nonsense. At best, it leads to chaos, or trivia. Feb 14 '19 at 17:53

• @TommiBrander Yeah, fixed that, thanks for the correction. Everyone else, I think teaching limits is a bit over 5yo level, though I do make sure it's clear that things like 1/0 aren't actually infinity, because infinity is not a number, but instead a placeholder for convenience. And the +/- infinity ... will leave that out for now too. – Joe Feb 14 '19 at 18:17

I've no idea whether this would work, but would relating it to forever help? Infinity is like forever but for numbers. Doing something for a week and then forever is the same as just doing it forever. When is forever? That's when you'd be "infinity years old", but there isn't a when because forever means "never stop" . . . So if you're travelling to infinity, which is travelling forever, when do you get there? When you stop. When do you stop? Never! Because, forever says never stop. But maybe prepare for questions about whether infinity really exists . . . I think forever is a much more familiar concept to a five-year-old than infinity, so maybe you can start from that.
• PS I think I was about $8$ when my father explained involute gears to me in terms of a piece of string with a knot in it, and I followed that, so I don't think you should give up just because an idea is quite advanced. Though I was $8$, not $5$. Feb 13 '19 at 15:16

Before trying to explain $$\infty + 1$$, it'll help if he has an intuitive grasp of what infinity means. I recommend using the cardinals rather than the ordinals, because they can be constructed without needing to understand limits. Below is a possible explanation that may help with that; I try to avoid using too much terminology, since he's 5, and in particular, when I say "number", I mean "non-negative integer" and when I say "infinity", I mean $$\aleph_0$$. You might mention briefly that when you say "numbers" here, you mean numbers like "3", but not numbers like "3 and a half".

Start with a set with just one number in it: $$\{1\}$$. We then add numbers to it one at a time, and we'll do it so that the biggest number in the set is also how many numbers are in the set. So right now, there's $$1$$ number in the set, and $$1$$ is the biggest number in the set. Next, we'll add the number that's one bigger than the biggest number in the set. $$1+1=2$$, so now our set is $$\{1,2\}$$ and the biggest number in it is $$2$$. We continue as such. $$2+1=3$$, $$3+1=4$$, and so on. So for any number, we have the corresponding set containing every number from 1 to that one, and our number is how many numbers are in that set. No matter how many numbers we add, there's always a biggest number in the set and that number is always how many numbers there are in the set. But there's also always a number that's one bigger than that one. So we can keep adding numbers to the set forever and never get all of them. There is no biggest number, and there are an unlimited number of numbers; no matter how many we have, we can always add one more.
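The invariant in this construction, that the biggest number in the set always equals how many numbers are in the set, can be checked mechanically for as many steps as you like. A small sketch in Python (illustrative only):

```python
# Build {1}, {1, 2}, {1, 2, 3}, ... and check at every step that the
# biggest number in the set equals the count of numbers in the set.
s = {1}
for _ in range(100):
    assert max(s) == len(s)   # the invariant holds so far...
    s.add(max(s) + 1)         # ...and we can always add one more
print(max(s), len(s))         # 101 101
```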
So if we want to ask "how many numbers are there altogether", we need a word for that that isn't a number. That's what infinity is. So $$\infty + 1$$ doesn't mean anything, because $$\infty$$ just means "unlimited"; it's specifically the thing we can't get to by adding 1 to numbers we already have. There are a few other kinds of infinity, but this is the easiest one to understand, so you want to make sure you really understand this kind before moving on to the others. • This is basically matheducators.stackexchange.com/a/15215/667 with sets. I don't think that sets are a helpful concept for a 5-year-old (also, for almost no 5-year-old, "3 and a half" will be a number). Feb 13 '19 at 17:43 • @Jasper You can call them groups of numbers or collections of numbers if that helps. The idea I want to emphasize here is that we move from viewing infinity as a number (even one bigger than any other) to infinity as an answer to the question, "How many?", along with an understanding that there is no number that's bigger than every other number. I wouldn't have thought that infinity would be a helpful concept for a 5 year old, but if Qasim thinks he's up to it, this is the approach I'd suggest. (And perhaps he'll consider non-integers to be numbers once he's 5 and a half years old.) – Ray Feb 13 '19 at 18:21 • +1 especially for "numbers like "3", but not numbers like "3 and a half"" which is the right kind of "definition" for an audience below a certain age. Feb 13 '19 at 21:35 • @TommiBrander I've never tried explaining infinity to a 5 year old before, I'm afraid. Similar approaches (albeit ones using a much more technical vocabulary) have certainly worked when explaining infinity to young adults who only had a similar level of understanding as the OP's son seems to. I think this explanation is worth trying, but can offer no guarantees. 
– Ray Feb 14 '19 at 16:56

• Adding that into the post would improve it, I feel, since answers should be backed up by experience or reliable sources. Feb 15 '19 at 9:06

Have you considered geometry? Take two points and draw lines, one through each point.

• Two lines meeting at an obtuse angle meet "soon," that is a small number.
• Two lines meeting at an acute angle meet "far away," that is a large number.
• Two parallel lines meet "infinitely far away."

In a way that gets you to limits, which may be too complicated. Note that this is an Alexandroff Extension or Riemann Sphere, depending on how you look at it. With slightly older students, the Riemann sphere is a good way to "visualize" infinity.

• Where the two lines meet, there will be both acute and obtuse angles. Just leave out the angle part. Feb 17 '19 at 5:37

• @SueVanHattum, I'm thinking of two points, two rulers or pencils. Or perhaps two points, one line on a graph paper, and a ruler. Change the angle from perpendicular to parallel and the angles from 90° to 0° become distances from zero to infinity. – o.m. Feb 17 '19 at 5:48

• Yep. Obtuse means greater than 90 degrees. I think you might want close for greater than 45 degrees and far for less than 45 degrees, perhaps. Feb 17 '19 at 20:38

• @SueVanHattum, what I had in mind when I wrote that was adjusting two sets of lines, not keeping one constant. Starting with an isosceles triangle with an obtuse angle on top, then "stretching" it step by step. – o.m. Feb 18 '19 at 6:08

• How has this worked when you tried? Feb 18 '19 at 7:44

The thing is, saying that infinity + any number equals infinity is a bit imprecise. There are two widely used concepts of infinity; they refer to cardinality and ordinality. In finite numbers, Cardinality is the concept of "how many" of something there are -- 1 sheep, 2 sheep, 3 sheep. Ordinality is "what order" they come in -- 1st sheep, 2nd sheep, 3rd sheep. With finite numbers they are highly tied to each other.
You can just "count labels" in a sense. With infinite sets the two concepts diverge. We'll start with the natural numbers -- the set of all counting numbers. 0, 1, 2, 3 etc. That'll be our "first infinity". If you take the first infinity, and add another element to it, you get the same cardinality. This is what people talk about when they say "infinity+1 equals infinity". More than that, if you take the first infinity, and double it, you get ... the same cardinality. You can even add an infinite number of infinities to it -- take "first infinity times first infinity" or "first infinity squared" -- and you get the same cardinality.

It isn't until you reach 2^"first infinity" that you reach a new cardinal (and, assuming the continuum hypothesis, it is the very next one). This can either be described as the "set of all subsets of the first infinity" or "the set of functions that go from the first infinity to yes/no" (it shouldn't be hard to see they describe the same thing). This is a bigger cardinal than the first infinity, and is also the same cardinality as the real numbers. The cardinality game continues from there.

So that is one branch. The other is ordinals. In ordinals, we talk about ordering things. For any two things, you can say which is in front of the other in the order. And for any collection of things ordered, we can find the "least" element (the one "behind" all the others), including the entire collection of ordered elements (we normally call this element 0). The "first infinity" in ordinals is ordering everything by the natural numbers. Everyone gets a tag that says "1st" or "1 million and 7th" or whatever. Now, in ordinals, we can then have someone with the label "1st in 2nd lineup", and we can state that the 2nd lineup goes after every value in the first lineup. This is "infinity plus 1" in ordinals, and it is a distinctly different way of ordering people.
What's more, you can have 2 infinite lineups (where the 2nd goes after the first), or 3, or an infinite number of infinite lineups (where each lineup goes after the one before). These are all distinct ordinals -- they describe fundamentally different ways of ordering things. And the game continues from there.

Now that I have disabused you of the notion that infinity+1 always equals infinity, how do we talk about it with a 5 year old? You could talk about that split. Say "up to infinity the idea of ordering and counting is the same. At infinity they are different." Then talk about infinite ordering and lineups.

The ordinal $$\omega$$ is a lineup that goes on forever. There is someone in front.

The ordinal $$\omega + 1$$ is two lineups. One that goes on forever, and one with a single person in it. That single person goes after the first lineup. It will get very boring for them.

The ordinal $$\omega +2$$ has 2 people in the second lineup.

The ordinal $$\omega + \omega = \omega \cdot 2$$ has two infinitely long lineups. The second goes after the first.

The ordinal $$\omega \cdot k$$ has $$k$$ lineups, each infinitely long.

The ordinal $$\omega \cdot \omega = \omega^2$$ has an infinite number of lineups, each infinitely long.

The ordinal $$\omega^2 + 1$$ has an infinite number of lineups, each infinitely long, plus one person who gets to go after everyone else is done.

The ordinal $$\omega^2 + \omega$$ has an infinite number of lineups, each infinitely long, plus another lineup that goes after the previous infinite set of lineups are all done.

The ordinal $$\omega^2 \cdot 2$$ has two collections, each with an infinite number of lineups, each infinitely long, with one going after the other.

The ordinal $$\omega^2 \cdot \omega = \omega^3$$ has an infinite number of collections, each with an infinite number of lineups, each infinitely long.

The ordinal $$\omega^\omega$$ has an infinitely long order of layers. In each layer, there is an infinite number of the next layer, all ordered.
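A side note on notation: in the standard convention for ordinal arithmetic, the finite factor in these products is written on the right ($\omega \cdot 2$ rather than $2\omega$), because ordinal addition and multiplication are not commutative. Two standard facts illustrate the asymmetry:

```latex
1 + \omega = \omega \quad\text{but}\quad \omega + 1 > \omega,
\qquad
2 \cdot \omega = \omega \quad\text{but}\quad \omega \cdot 2 = \omega + \omega > \omega.
```

In lineup terms: one person in front of an infinite lineup can be relabeled away, but one person behind it cannot.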
You probably will break down before getting this far. Also talk about cardinality. Here the other answers cover things really well -- things like the Hilbert Hotel and the Cat in Numberland are great resources. A fun part of this is that every Cardinality has a whole bunch of Ordinalities associated with it. You can look at the stories, like Hilbert Hotel, and talk about how the Ordinality changed even when the Cardinality didn't. And you can talk about how this doesn't work for "normal" numbers. You cannot change the fundamental ordinality without changing the cardinality. Having two lines, one of 2 people, followed by a line of 3 people, is the ordinal 2+3, which is fundamentally the same as the ordinal 5. You can "just paste" the 3 people onto the end of the first line. With infinite ordinalities, you cannot reach the end of the first line to paste the second line on. It is infinitely far away.

• How has this worked when you have tried? Feb 18 '19 at 7:45

If you are 24 years older than him now, the answer can be "I will be 24 years older than you."

• How did this explain infinity when you tried using it as an explanation? Feb 18 '19 at 7:45

Imagine you have a snack machine that contains an infinite number of candies and gives one to you (for free) every time you press the button. So you have an infinite number of candies. Now your friend gives you a candy of the same kind as the snack machine. Do you have more candies than you had before that? No, as instead of your friend's candy you could just press a button and get it from there. The machine will continue to give them to you infinitely. So infinity plus one is still infinity - no profit in extending an infinite resource by the same thing from a finite source.

• How has this worked in your experience? Feb 18 '19 at 7:46

From what I know, infinity is a fun "number". Let's say it takes one minute to climb on a rock. If you have a tower made of 100 rocks, it takes you 100 minutes to climb on top of it.
Now let's say you have a tower made of an infinite number of rocks. It's not just big, it goes higher than the sky, it goes higher than the sun, it never stops going higher than everything that isn't infinite too. If you try to climb on top of it, you'll never manage to, you'll keep climbing forever, and won't reach the top even after an infinite amount of time.

If you possess a tower made of an infinity of rocks, and someone gives you another rock, you now possess an infinity of rocks from the tower, plus the rock you have just been given. You possess infinity + 1 rocks. If you manage to put this newly acquired rock in the tower, it still takes as much time as before to reach its top by climbing on it: forever. The tower still is made of an infinite amount of rocks.

If you possess two towers, each made of an infinite amount of rocks, you possess two infinities of rocks. But if you somehow manage to put the two towers one "on top" of the other, it takes the same amount of time to reach the top of it as it took to climb the former tower.

Now let's say you have an infinitely big hole, it's so big that you can put as many rocks as you want in it, it will never get full. Even if you put your infinitely big tower in it, there will still be room left for an infinite amount of rocks. Even if you put an infinite amount of infinite towers in it, it won't change that, there is still an infinite amount of room left for rocks.

If you share your infinite amount of rocks with people around you, you can give them rocks forever, they all get an infinite amount of rocks. Even if you share it with an infinite amount of people, you'll never stop giving them rocks.

If you build a square floor made of rocks, by putting an infinite straight line of rocks in front of you, and an infinite straight line of rocks alongside every rock composing the first line, its area will be infinity².
You can then take the rocks of your floor and build a tower with it; you will never stop building the tower, which means it would create an infinitely big tower. But the tower will take much more space than the floor, because it has an infinite amount of floors whose areas are also infinity².

• How has this worked in your experience? Feb 18 '19 at 7:46

• Mostly, the kid will try to make you add one more rock to the tower, making you repeat that it still takes you the exact same "amount of time to reach the top" multiple times, before processing it. Once he gets that, the two towers "on top of each other" goes smoother. Infinite holes, infinite sharing, and infinite dimensions are most of the time near to impossible to understand for young children, but it works for older ones. For clever ones, you sometimes have to explain the difference between an infinite hole and a hole that has the size of an infinite tower. Feb 19 '19 at 1:50

• About the infinite amount of time climbing not being enough to reach the top, you might have to explain that the tower never stops, there is no top of the tower, and it's impossible to reach somewhere that doesn't even exist. Feb 19 '19 at 1:54

• Nice. Adding those details into the answer would improve it. Feb 19 '19 at 6:47

I don't think explaining the $$\infty + 1 = \infty$$ thing directly is very productive. Its meaning is pretty fuzzy, it is very counter-intuitive, and it doesn't give any new perspective by itself. If I wanted to explain how equality works differently on infinities, I would rather try to show an example of how 2 infinities can appear both equal and unequal, at the same time. This might be e.g.:

• (natural) numbers vs even (natural) numbers. There is obviously the same amount of both, as every even number is just $$2n$$ for some $$n$$. But an equally obvious fact is that there are twice as many natural numbers as even numbers, because alongside each even number $$2n$$ there is also the odd number $$2n+1$$.

• (natural) numbers vs $$n+5$$ for each of them.
Similar logic to the one above: there is an obvious 1:1 relation between them, but also the first set clearly has 5 more elements.

This doesn't really prove anything like $$\infty + 1 = \infty$$, but it conveys a general fact of "equality for infinities is really strange, and definitely different than for numbers". This is similar to the hotel example from a different answer, but I believe it is much better by having clearly separated sets. Instead of changing one set in some complicated way, we have two pretty simple things, lying next to each other, that we can compare in different ways.

I'm not sure whether this is a useful answer for a 5-year-old. I guess other answers are much better in this regard. However, I myself struggled with these strange infinity equalities, and examples like the above were a key to get me to understand it, so I wanted to share. I believe they are the best way to explain where the "$$\infty + 1$$" paradox comes from.

• How has this worked when you have tried? Feb 18 '19 at 7:46

• I haven't really used that on anyone, but it was used on me. I struggled with this ∞+1 thing for quite a while, and seeing a similar example got me to finally accept it. I guess I was about high school age then, so, as noted, I can't really tell how relevant it is for younger children. – Frax Feb 18 '19 at 21:08

• @TommiBrander I slightly modified the answer to make the intent clearer. I have to admit with some embarrassment that it is still only borderline relevant. Yet I think this perspective (i.e. showing 2 sets that are both intuitively equal and unequal in size) was actually missing from other answers, so hopefully it will be useful for someone. – Frax Feb 18 '19 at 21:41

Short answer: I would say, infinity is larger than the largest number anyone can count.

• How has this worked in your experience to explain the concept to a young child? Feb 18 '19 at 7:46

• Never got a chance to do that. Certainly I think this is the right way to explain.
A curious child would ask a follow-up question to this answer, from which you can lead him up to the concept. Feb 18 '19 at 12:17

• Okay. Answers in this SE should be backed up by reliable sources or personal experiences, which explains the low score on your answer. We welcome your input on matters you do have experience with. Feb 18 '19 at 12:54

• @TommiBrander this is a new user, it would be best to help them understand the site and encourage them. Writing nearly the same phrase onto all of these answers isn't really helpful. They don't need to try it to write an answer, but I agree that explaining why they think it would work would improve the answer. Feb 19 '19 at 7:13

• Hi, welcome to Mathematics.SE! Please check out the tour and help center page. This answer could be improved by explaining why you think this would be a good explanation. Feb 19 '19 at 7:13

Imagine a steel ball the size of the earth. A fly lands on the surface once every hundred years. When the fly lands, its feet wear away the surface of the steel ball by an amount that can only be described as next to nothing. When the whole of the steel ball has been worn away to nothing, INFINITY has not even started.

• How has this worked in your experience to explain the concept to a young child? Feb 18 '19 at 7:47

• Hi, welcome to Mathematics.SE! Please check out the tour and help center page. This answer could be improved by explaining why you think this would be a good explanation Feb 19 '19 at 7:11
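Several of the approaches in this thread (the Hilbert hotel, the naturals-versus-evens comparison) rest on the same device: two collections count as "the same size" when their members can be paired off one-to-one. The pairing itself is simple enough to sketch in Python (illustrative only; any finite window of the infinite pairing looks like this):

```python
# Pair each natural number n with the even number 2n: every natural
# gets exactly one even partner, and every even number is used once.
# This is the sense in which there are "as many" evens as naturals.
naturals = range(1, 11)                       # a finite window: 1..10
evens_pairing = [(n, 2 * n) for n in naturals]
print(evens_pairing)                          # (1, 2), (2, 4), ..., (10, 20)

# The Hilbert-hotel move is the same trick in disguise: the guest in
# room n moves to room n + 1, pairing the old rooms with rooms
# 2, 3, 4, ... and leaving room 1 free for the new guest.
hotel_pairing = [(n, n + 1) for n in naturals]
print(hotel_pairing)                          # (1, 2), (2, 3), ..., (10, 11)
```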
https://math.stackexchange.com/questions/3333178/given-a-hermitian-matrix-a-prove-that-a-ii-is-nonsingular
# Given a Hermitian matrix $A$, prove that $(A-iI)$ is nonsingular

The exercise is to prove that, given $$A$$ a Hermitian matrix, then $$(A-iI)$$ is nonsingular. I tried to think about what it means to be nonsingular (that $$(A-iI)X=0$$ should have only the trivial solution), but was unable to prove it in any way.

• Hint: all the eigenvalues of such a matrix are real numbers. Aug 24 '19 at 20:46

Hint: The matrix $$A-i I$$ is singular iff $$i$$ is a singular value of $$A$$. Now, the spectral theorem tells us that all the singular values of a Hermitian matrix are...

• I think eigenvalue is the correct term. Aug 25 '19 at 16:27

Since $$A = A^\dagger, \tag 1$$ there exists a unitary matrix $$U$$, $$UU^\dagger = U^\dagger U = I, \tag 2$$ such that $$UAU^\dagger = \text{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n), \tag 3$$ where the $$\lambda_i$$ are the eigenvalues of $$A$$; of course (1) implies that $$\lambda_i \in \Bbb R, \; 1 \le i \le n, \tag 4$$ as is well-known. It follows that $$U(A - iI)U^\dagger = UAU^\dagger - i UIU^\dagger = UAU^\dagger - iI$$ $$= \text{diag}(\lambda_1 - i, \lambda_2 - i, \ldots, \lambda_n - i); \tag 5$$ since the $$\lambda_i$$ are real, $$\lambda_i - i \ne 0, \; 1 \le i \le n; \tag 6$$ thus the matrix $$U(A - iI)U^\dagger$$ is non-singular, hence so is $$A - iI = U^\dagger \text{diag}(\lambda_1 - i, \lambda_2 - i, \ldots, \lambda_n - i) U. \tag 7$$ $$OE\Delta.$$
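Not part of the original thread, but the spectral-theorem argument is easy to sanity-check numerically. A NumPy sketch (the matrix size and random seed are arbitrary choices): since the eigenvalues $\lambda_k$ of a Hermitian matrix are real, every eigenvalue $\lambda_k - i$ of $A - iI$ has modulus $\sqrt{\lambda_k^2 + 1} \ge 1$, so the determinant cannot vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# B + B^H is Hermitian for any complex matrix B.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B + B.conj().T

lam = np.linalg.eigvalsh(A)   # eigenvalues of a Hermitian matrix are real
shifted = lam - 1j            # eigenvalues of A - iI

# Each |lambda_k - i| = sqrt(lambda_k^2 + 1) >= 1, so det(A - iI) != 0.
det = np.linalg.det(A - 1j * np.eye(n))
print(np.min(np.abs(shifted)), abs(det))
```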
https://artofproblemsolving.com/wiki/index.php?title=2015_AMC_12B_Problems/Problem_17&diff=99759&oldid=98584
# Difference between revisions of "2015 AMC 12B Problems/Problem 17"

## Problem

An unfair coin lands on heads with a probability of $\tfrac{1}{4}$. When tossed $n>1$ times, the probability of exactly two heads is the same as the probability of exactly three heads. What is the value of $n$?

$\textbf{(A)}\; 5 \qquad\textbf{(B)}\; 8 \qquad\textbf{(C)}\; 10 \qquad\textbf{(D)}\; 11 \qquad\textbf{(E)}\; 13$

## Solution

When tossed $n$ times, the probability of getting exactly 2 heads and the rest tails is $$\dbinom{n}{2} {\left( \frac{1}{4} \right)}^2 {\left( \frac{3}{4} \right) }^{n-2}.$$ Similarly, the probability of getting exactly 3 heads is $$\dbinom{n}{3}{\left( \frac{1}{4} \right)}^3 {\left( \frac{3}{4} \right) }^{n-3}.$$ Now set the two probabilities equal to each other and solve for $n$: $$\dbinom{n}{2}{\left( \frac{1}{4} \right)}^2 {\left( \frac{3}{4} \right) }^{n-2}=\dbinom{n}{3}{\left( \frac{1}{4} \right)}^3 {\left( \frac{3}{4} \right) }^{n-3}$$ $$\frac{n(n-1)}{2!} \cdot \frac{3}{4} = \frac{n(n-1)(n-2)}{3!} \cdot \frac{1}{4}$$ $$3 = \frac{n-2}{3}$$ $$n-2 = 9$$ $$n = \fbox{\textbf{(D)}\; 11}$$

Note: the original problem did not specify $n>1$, so $n=1$ was a solution, but this was fixed in the Wiki problem text so that the answer would make sense. — @adihaya (talk) 15:23, 19 February 2016 (EST)

## Solution 2

Bash it out with the answer choices! (not really a rigorous solution)
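A less bashy way to check the answer choices: the algebra above can be verified with a few lines of Python that scan candidate values of $n$ and compare the two binomial probabilities directly (the scan range of 50 is an arbitrary choice):

```python
from math import comb

p, q = 1 / 4, 3 / 4

def prob_heads(n, k):
    # P(exactly k heads in n tosses) for the biased coin
    return comb(n, k) * p**k * q**(n - k)

# All n > 1 with P(exactly 2 heads) = P(exactly 3 heads)
matches = [n for n in range(2, 50)
           if abs(prob_heads(n, 2) - prob_heads(n, 3)) < 1e-12]
print(matches)  # -> [11]
```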
https://dml.cz/handle/10338.dmlcz/134001?show=full
# Article

Title: The crossing number of the generalized Petersen graph $P[3k,k]$ (English)
Author: Fiorini, Stanley
Author: Gauci, John Baptist
Language: English
Journal: Mathematica Bohemica
ISSN: 0862-7959 (print)
ISSN: 2464-7136 (online)
Volume: 128
Issue: 4
Year: 2003
Pages: 337-347
Summary lang: English
Category: math

Summary: Guy and Harary (1967) have shown that, for $k\ge 3$, the graph $P[2k,k]$ is homeomorphic to the Möbius ladder ${M_{2k}}$, so that its crossing number is one; it is well known that $P[2k,2]$ is planar. Exoo, Harary and Kabell (1981) have shown that the crossing number of $P[2k+1,2]$ is three, for $k\ge 2.$ Fiorini (1986) and Richter and Salazar (2002) have shown that $P[9,3]$ has crossing number two and that $P[3k,3]$ has crossing number $k$, provided $k\ge 4$. We extend this result by showing that $P[3k,k]$ also has crossing number $k$ for all $k\ge 4$. (English)

Keyword: graph
Keyword: drawing
Keyword: crossing number
Keyword: generalized Petersen graph
Keyword: Cartesian product
MSC: 05C10
idZBL: Zbl 1050.05034
idMR: MR2032472
DOI: 10.21136/MB.2003.134001
Date available: 2009-09-24T22:10:31Z
Last updated: 2020-07-29
Stable URL: http://hdl.handle.net/10338.dmlcz/134001

Reference: [1] Exoo G., Harary F., Kabell J.: The crossing numbers of some generalized Petersen graphs. Math. Scand. 48 (1981), 184–188. MR 0631334, 10.7146/math.scand.a-11910
Reference: [2] Fiorini S.: On the crossing number of generalized Petersen graphs. Ann. Discrete Math. 30 (1986), 225–242. Zbl 0595.05030, MR 0861299
Reference: [3] Guy R. K., Harary F.: On the Möbius ladders. Canad. Math. Bull. 10 (1967), 493–496. MR 0224499, 10.4153/CMB-1967-046-4
Reference: [4] Jendrol’ S., Ščerbová M.: On the crossing numbers of ${S_m}\times {C_n}$. Čas. Pěst. Mat. 107 (1982), 225–230. MR 0673046
Reference: [5] Kuratowski K.: Sur le problème des courbes gauches en topologie. Fund. Math. 15 (1930), 271–283. 10.4064/fm-15-1-271-283
Reference: [6] Richter R. B., Salazar G.: The crossing number of $P(n,3)$. Graphs Combin. 18 (2002), 381–394. MR 1913677, 10.1007/s003730200028

## Files

MathBohem_128-2003-4_1.pdf (389.8 KB, application/pdf)
https://www.core-econ.org/the-economy/book/text/leibniz-08-04-01.html
# Leibniz

## 8.4.1 The firm and market supply curves

In our model of a city with many small bakeries and many consumers, each bakery is a price-taker in the market for bread. The supply curve of an individual bakery is determined by its marginal cost curve. The market supply at a given price is the total amount of bread that will be supplied by all the bakeries together. This Leibniz explains how to find the firm and market supply curves mathematically.

Suppose there are $m$ bakeries in the city, and the $i^{th}$ bakery has a total cost function $C_i(Q_i)$, where $Q_i$ is the quantity of bread that it produces, for $i = 1,\ …,\ m$. All the bakeries are price-takers. We will first determine the supply functions of the individual bakeries, and then add them together to determine the market supply.

### The supply function of bakery $i$

Bakery $i$ takes the market price, $P$, as given, and chooses its quantity $Q_i$ to maximize its profit, which is given by:

$\Pi_i = P Q_i - C_i(Q_i)$

Differentiating with respect to $Q_i$ and setting the derivative equal to zero gives us the first-order condition:

$P = C_i'(Q_i)$

which can be interpreted as saying that the firm will choose its quantity such that the marginal cost is equal to the market price. For each possible value of $P$ there is a corresponding optimal quantity $Q_i$ satisfying this equation. Since the equation tells us the value of $P$ at which the firm would supply quantity $Q_i$, it can be described as the firm’s inverse supply function.

Figure 8.7 of the text, reproduced below as Figure 1, shows the firm’s marginal cost, or equivalently its inverse supply function, $P = C_i'(Q_i)$, in the left-hand panel. The inverse of this function is the direct supply function; it tells us the value $Q_i$ that the firm will choose for a given value of $P$. We will write the firm’s supply function as:

$Q_i = Q_i^S(P)$

For example, suppose firm $i$ has cost function $C_i(Q_i) = 3 Q_i^2+2Q_i$.
Then by calculating the marginal cost we find that its inverse supply function is $P = 6 Q_i+2$. Rearranging this equation to find $Q_i$ in terms of $P$ gives us the supply function: $Q_i^S(P) = (P-2)/6$.

Figure 1 The firm and market supply curves.

### The market supply function

When the market price is $P,\ Q_1^S (P),\ Q_2^S (P),\ \ldots,\ Q_m^S (P)$ are the individual quantities supplied by the $m$ firms. If the firms all had the same cost functions, they would have identical supply functions; if not, their supply functions will differ. The quantity supplied to the whole market at price $P$ is:

$Q^S(P) = Q_1^S(P) + Q_2^S(P) + \cdots + Q_m^S(P)$

The function $Q^S(P)$ is the market supply function. The graph of this function, typically drawn with $P$ on the vertical axis and $Q$ on the horizontal, is the market supply curve. Graphically, the process of going from the supply curves of individual firms to that of the whole market can be viewed as aggregation in the horizontal direction; at any particular price, the individual supplies are added up to give the market supply. In Figure 1, we have drawn the market supply in the right-hand panel, on the assumption that there are 50 bakeries ($m = 50$) with identical supply functions. So at each price, market supply $Q^S(P)$ is 50 times the individual firm supply $Q_i^S(P)$.

As we discussed in the text, the market supply curve can be interpreted as the marginal cost curve for the market as a whole. It gives the minimum price at which sellers are willing to supply a given amount of the good. Since each firm chooses a level of output where price equals marginal cost, each firm that produces a positive amount of output must have the same marginal cost. The market supply curve measures the relationship between total output and the common marginal cost of producing this output. The interpretation of the market supply curve as a marginal cost curve is one reason for the standard practice of drawing supply curves with $P$ on the vertical axis.
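The worked example can be sketched numerically. The Python snippet below (not part of the original Leibniz; the function names are illustrative) encodes the supply function $Q_i^S(P) = (P-2)/6$ derived above and the horizontal aggregation over the 50 identical bakeries assumed in Figure 1:

```python
def firm_supply(P):
    # C(Q) = 3Q^2 + 2Q  =>  marginal cost C'(Q) = 6Q + 2,
    # so the inverse supply function is P = 6Q + 2.
    # No output is supplied below the intercept P = 2.
    return max(P - 2, 0) / 6

def market_supply(P, m=50):
    # Horizontal aggregation over m identical price-taking firms
    return m * firm_supply(P)

for P in (2, 5, 8):
    print(P, firm_supply(P), market_supply(P))
```

At a price of 8, each bakery supplies 1 unit and the market supplies 50, which is exactly the 50-fold horizontal stretch shown in the right-hand panel of Figure 1.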
Read more: Section 7.4 of Malcolm Pemberton and Nicholas Rau. 2015. Mathematics for economists: An introductory textbook, 4th ed. Manchester: Manchester University Press.
https://atarnotes.com/forum/index.php?topic=192053.msg1173099;topicseen
September 24, 2020, 11:04:51 am

### AuthorTopic: spherical geometry help?  (Read 147 times)

0 Members and 1 Guest are viewing this topic.

#### parista
• Fresh Poster
• Posts: 4
• Respect: 0

##### spherical geometry help?
« on: August 11, 2020, 08:36:02 pm »
0
Hi everyone, I'm really having trouble trying to tackle this question and don't know where to start. I tried drawing diagrams but could not figure it out.

Two locations X and Y have the same latitude 30°S. The longitude of X is 145°E and the longitude of Y is 130°E.

d) Find the difference in distance between the distance around the great circle and the distance around the parallel of latitude (whole question attached below)

If this helps, I found the answers for a), b) and c) and got 5542.56 km, 1451 km and 6400 km respectively. Any help would be appreciated, thank you so much!

#### mathsTeacher82
• Posts: 16
• Respect: +2

##### Re: spherical geometry help?
« Reply #1 on: August 12, 2020, 08:03:39 am »
+2
Hi Parista, You need to find the circumference of the small circle with latitude 30 degrees. So you need to find the radius of this small circle first. I have attached a diagram showing the triangle which you can use to find this radius. Let me know if it helps, or if you still have any questions...
« Last Edit: August 12, 2020, 08:30:59 am by mathsTeacher82 »

#### parista
• Fresh Poster
• Posts: 4
• Respect: 0

##### Re: spherical geometry help?
« Reply #2 on: August 12, 2020, 09:40:22 am »
0
Quote: "Hi Parista, You need to find the circumference of the small circle with latitude 30 degrees. So you need to find the radius of this small circle first. I have attached a diagram showing the triangle which you can use to find this radius. Let me know if it helps, or if you still have any questions..."

hello! yes, I've found out the radius of the small circle in part a) however, I'm not quite sure what I am supposed to do after that?

#### mathsTeacher82
• Posts: 16
• Respect: +2

##### Re: spherical geometry help?
« Reply #3 on: August 12, 2020, 10:06:20 am »
+1
OK I see. It's asking for the distance between X and Y, either around the great circle or the small circle with 30 degree parallel of latitude... Below is an updated diagram to show the triangle and sector you would need to use. You would use the arc length formula with the radius OX = 6400, but the tricky part is finding the angle <XOY.
« Last Edit: August 12, 2020, 04:14:37 pm by mathsTeacher82 »

#### parista
• Fresh Poster
• Posts: 4
• Respect: 0

##### Re: spherical geometry help?
« Reply #4 on: August 12, 2020, 10:13:14 am »
0
Quote: "OK I see. The distance around the great circle is $C_1=2\times \pi \times 6400$ The distance around the 30 degree parallel of latitude is $C_2=2\times \pi \times r$ , where r is the radius of the small circle. Then subtract to find the difference..."

ahh.. okay I see what you mean! however, this was the sample solution (photo attached) which is very convoluted :-\

#### mathsTeacher82
• Posts: 16
• Respect: +2

##### Re: spherical geometry help?
« Reply #5 on: August 12, 2020, 04:07:34 pm »
+2
Yes the answer you attached is correct, although there are a few typos in the working which I have corrected (in red). But to be honest this question should not appear on your exam. Here is the relevant dot point from the VCAA study design: "use of a great circle to determine the shortest distance between two points on the surface of the earth that have the same longitude" If the points are on the same longitude, the required angle in the arc length formula is just the difference in longitudes, and you would not have to go through the "convoluted" process in this problem.
« Last Edit: August 12, 2020, 04:09:14 pm by mathsTeacher82 »

#### parista
• Fresh Poster
• Posts: 4
• Respect: 0

##### Re: spherical geometry help?
« Reply #6 on: August 12, 2020, 04:12:37 pm »
0
Yes the answer you attached is correct, although there are a few typos in the working which I have corrected (in red).
But to be honest this question should not appear on your exam. Here is the relevant dot point from the VCAA study design: "use of a great circle to determine the shortest distance between two points on the surface of the earth that have the same longitude"
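For anyone checking the numbers in this thread, here is a short Python sketch of the whole calculation, using R = 6400 km as in the question and the chord construction described in the replies (the angle ∠XOY comes from the chord XY, since the arc length formula needs the angle at the Earth's centre):

```python
from math import asin, cos, radians, sin

R = 6400.0                 # Earth radius used in the question, in km
lat = radians(30)          # both X and Y are at latitude 30 degrees South
dlon = radians(145 - 130)  # 15 degree difference in longitude

# Radius of the small circle at latitude 30 (part a)
r_small = R * cos(lat)

# Distance from X to Y along the parallel of latitude (part b)
d_parallel = r_small * dlon

# Great-circle distance: the chord XY subtends the angle <XOY at the centre O
chord = 2 * r_small * sin(dlon / 2)
angle_xoy = 2 * asin(chord / (2 * R))
d_great = R * angle_xoy

print(round(r_small, 2), round(d_parallel, 2), round(d_great, 2))
```

The difference d_parallel - d_great comes out at roughly 1 km, which is why part d) needs the full chord construction rather than the same-longitude shortcut quoted from the study design.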
https://webapps.stackexchange.com/questions/65390/sum-the-product-of-two-columns-in-google-spreadsheets
Here's the situation:

ITEM  COST   CUST1  CUST2  CUST3  TQTY
foo   $0.5   1      0      0.5    1.5
baz   $1.0   2      1      0      3
bar   $1.5   0.5    0      0.3    0.8
SUBT         $2.75  $1.00  $0.75  $4.50

Simple, right? The SUBT row should have the contents of CUST1*COST, CUST2*COST, CUST3*COST for each row. And the TQTY column has the sum of CUST1+CUST2+CUST3 for each row.

At least it SHOULD be simple, but auto-fill keeps screwing me. TQTY is easy, of course, but I can't for the life of me figure out how to use a formula to give me the sum of the product of two columns cell by cell. In particular, I need to do this in a way that will be user-manageable for someone wanting to insert rows or columns in the middle and have it continue to "just work".

• Hi Jim, can't reproduce your figures. Shouldn't you be using SUMPRODUCT for COST and CUST1 to have the total per customer? – Jacob Jan Tuinstra Jul 15 '14 at 8:05

This will sum the quantities, per row, for all rows in the range.

## Formula

=ARRAYFORMULA(SUMIF(IF(COLUMN(C2:E4),ROW(C2:E4)),ROW(C2:E4),C2:E4))

## Example

I've created an example file for you: Sum over rows

## Reference

https://stackoverflow.com/a/21804838/1536038

• Thank you so much - SUMPRODUCT is exactly what I was looking for (TQTY wasn't the sticking part, I can get that just by summing b2:e2, etc). – Jim Salter Jul 16 '14 at 0:59

The right function to use is SumProduct

Formula

=SUMPRODUCT(array1, [array2, ...])

Screenshot
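Outside of Sheets, what SUMPRODUCT computes is simply the sum of elementwise products of two ranges. A Python sketch with the question's columns makes that concrete (note that the subtotals these inputs actually produce differ from the SUBT row posted in the question, which is presumably what the first comment is pointing out):

```python
cost = [0.5, 1.0, 1.5]  # foo, baz, bar
customers = {
    "CUST1": [1, 2, 0.5],
    "CUST2": [0, 1, 0],
    "CUST3": [0.5, 0, 0.3],
}

# Equivalent of =SUMPRODUCT(cost_range, qty_range) for each customer column
subtotals = {name: sum(c * q for c, q in zip(cost, qty))
             for name, qty in customers.items()}
print(subtotals)
```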
https://www.physicsforums.com/threads/electric-field-from-a-non-uniformly-charged-disk.894608/
# Electric field from a non-uniformly charged disk

1. Nov 24, 2016

### mshahi

1. The problem statement, all variables and given/known data

We are given a disk with negligible thickness, a radius of 1 m, and a surface charge density of σ(x,y) = 1 + cos(π√(x²+y²)). The disk is centered at the origin of the xy plane. We are also given the location of a point charge in Cartesian coordinates, for example [0.5, 0.5, 2]. We need to find the electric field components (x,y,z) at the location of the point charge.

2. Relevant equations

d$\vec E$ = σ dS/(4πε₀) ⋅ $\hat r$/r²

3. The attempt at a solution

My first thought was to make dS a thin ring centred on the origin. This would give dS = 2πr'dr' where r' is the radius from the origin to the ring. I then thought that I could write r² as r'² + z², and r' would also be equal to √(x²+y²). Plugging this and σ into the integral just to find the magnitude of $\vec E$ we get:

∫ (1+cos(πr'))r'/(r'²+z²) dr'

However I have realised that this is wrong because a) the answer it gives is far too small to be reasonable, b) I think this calculation assumes a spherical field which it clearly isn't, and c) the relationship to find r in terms of r' and z only holds if the point charge is over the disk, which it isn't necessarily. I think I am meant to split the integral into x, y, and z components but am unsure of whether this is the correct approach.

Now I am completely stuck, none of the notes I can find explain this, I even took out a book from the 1950s on electrostatics to try and find the way to solve this and I just cannot for the life of me find it anywhere! Any help would be very much appreciated!

2. Nov 24, 2016

### BvU

Hello mshahi,

Are you aware that the $\vec r$ in $d\vec E$ is a different one from the $r$ in $dS$? How? Writing $r^2$ as $r'\,^2 + z^2$ makes $x$ and $y$ disappear. Or are you only interested in the field on the z axis? Make a drawing to oversee the situation and set up an expression for the integration.
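Not from the thread, but a brute-force numerical sketch makes BvU's point about keeping the components concrete: integrate the Coulomb kernel over the disk in polar coordinates (r', φ), holding on to the full separation vector rather than its magnitude. Units here are chosen so that 1/(4πε₀) = 1, and the grid sizes are arbitrary:

```python
import numpy as np

P = np.array([0.5, 0.5, 2.0])  # the field point from the problem

def sigma(rp):
    # surface charge density sigma(r') = 1 + cos(pi * r')
    return 1.0 + np.cos(np.pi * rp)

def E_field(P, n_r=400, n_phi=400):
    # midpoint rule on a polar grid over the unit disk
    r = (np.arange(n_r) + 0.5) / n_r
    phi = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
    rg, pg = np.meshgrid(r, phi, indexing="ij")
    src = np.stack([rg * np.cos(pg), rg * np.sin(pg), np.zeros_like(rg)], axis=-1)
    dS = rg * (1.0 / n_r) * (2 * np.pi / n_phi)   # area element r' dr' dphi
    sep = P - src                                  # source point -> field point
    dist = np.linalg.norm(sep, axis=-1)
    dE = (sigma(rg) * dS / dist**3)[..., None] * sep
    return dE.sum(axis=(0, 1))

Ex, Ey, Ez = E_field(P)
print(Ex, Ey, Ez)
```

Because the field point sits at x = y, the x and y components come out equal, and since every surface element is closer in the transverse direction than |z| = 2, the z component dominates. That asymmetry between components is exactly what the single-integral "magnitude" attempt throws away.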
https://euler.stephan-brumme.com/148/
# Problem 148: Exploring Pascal's triangle

We can easily verify that none of the entries in the first seven rows of Pascal's triangle are divisible by 7:

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1

However, if we check the first one hundred rows, we will find that only 2361 of the 5050 entries are not divisible by 7. Find the number of entries which are not divisible by 7 in the first one billion (10^9) rows of Pascal's triangle.

# My Algorithm

I needed a few attempts to finally solve this problem. My first idea was to iteratively create the triangle's rows, one at a time. To find the binomial coefficient C(n+1,k+1) you only need the previous row:

C(n+1,k+1) = C(n,k) + C(n,k+1)

To avoid overflows, each value should be the original value modulo 7. That works because (a+b) mod 7 = ((a mod 7) + (b mod 7)) mod 7 (that's no special property of 7, you can pick any integer).

Unfortunately I knew right from the start that this approach would be slow ... and memory-consuming (roughly 2 GByte). Nevertheless I used the code (see nextRow) to show me the results for the first 1000 rows. And then a pattern appeared (my rows start at index 0):

row    0  1  2  3  4  5  6  7  8  9  10  13  14  15
found  1  2  3  4  5  6  7  2  4  6   8  14   3   6

If I convert the row from the decimal system to a number in base 7 then

found(row) = (digit_1(row) + 1) * (digit_2(row) + 1) * ... * (digit_n(row) + 1)

For example: found(15_10) = found(21_7) = (2+1) * (1+1) = 6

My function countNonDivisible does exactly that: it converts its parameter to base 7 and multiplies all digits plus one. It takes 34 seconds until the correct result is displayed on my computer. The inner-most loop of countNonDivisible consists of modulo and division operations - they are extremely slow in comparison to addition, subtraction, multiplication.
And actually there is no need to perform these divisions: I process all numbers consecutively in ascending order. Therefore my final algorithm stores all digits of row in base 7. To speed up the program, not the true digits but the digits plus one are stored because when multiplying all digits I have to add one. That means that my array base7 has up to 12 digits (7^12 > 10^9), each from 1 to 7. Incrementing by one can cause some digits to become 7+1=8 → reset them to 1 and carry over 1. That algorithm needs about 3.5 seconds.

## Alternative Approaches

My final algorithm is still a brute-force algorithm. You can find a closed formula, too: The sum of the first 7^1 = 7 rows is 28. The sum of the first 7^2 = 49 rows is 28^2 = 784 ... and so on.

## Note

A substantial reason for the performance gain of my third algorithm is that base7 fits into the CPU cache and/or CPU registers (it's just 12 bytes). After changing the data type from std::vector<unsigned char> to std::vector<unsigned int> the execution time explodes from 3.5 to 28 seconds (and unsigned short → 3.9 seconds).

# My code

… was written in C++11 and can be compiled with G++, Clang++, Visual C++. You can download it, too.
```cpp
#include <iostream>
#include <vector>

typedef std::vector<unsigned char> Row;
const unsigned int Modulo = 7;

// generate next row of Pascal's triangle modulo a number (> 1)
// return count of elements that are not a multiple of modulo (in C++ speak: x % modulo != 0)
unsigned long long nextRow(Row& row)
{
  // last value is always 1
  row.push_back(1);
  if (row.size() == 1)
    return 1;

  // first and last value are never a multiple of 7
  unsigned long long result = 2;

  // C(n+1,k+1) = C(n,k) + C(n,k+1)
  for (size_t k = row.size() - 2; k > 0; k--)
  {
    // note: I'm processing the row back-to-front
    // therefore minus 1 instead of plus 1
    unsigned char current = row[k] + row[k - 1];
    // subtraction is faster than modulo: current %= modulo
    // all values must be 0 ... 2*(modulo-1)
    if (current >= Modulo)
      current -= Modulo;
    // not divisible ?
    if (current != 0)
      result++;
    row[k] = current;
  }
  return result;
}

// convert to base 7 and multiply all digits plus 1
unsigned long long countNonDivisible(unsigned int row)
{
  unsigned long long result = 1;
  while (row > 0)
  {
    // one more digit ...
    result *= (row % Modulo) + 1;
    row /= Modulo;
  }
  return result;
}

int main()
{
  unsigned int numRows = 1000000000;
  std::cin >> numRows;

  // for simple algorithm based on nextRow()
  Row current = { 1 };

  // for my fastest pseudo brute-force algorithm
  std::vector<unsigned char> base7(12, 1); // 7^12 > 10^9

  unsigned long long count = 1;
  for (unsigned int row = 1; row < numRows; row++)
  {
    // simple algorithm (basically takes forever and needs tons of memory)
    //auto found = nextRow(current);
    //std::cout << row << " " << found << std::endl;

    // slightly more advanced
    //auto found = countNonDivisible(row);

    // and my fastest (still pseudo brute-force) algorithm:
    // keep all digits of row in base 7 in an array base7 with a twist:
    // each digit is one higher than it should be
    // => because previously I had to add 1 before multiplying

    // next number
    base7[0]++;
    // carry over to next digits
    auto carryPos = 0;
    while (base7[carryPos] == Modulo + 1)
    {
      base7[carryPos] = 1; // remember: start at 1 instead of 0
      base7[carryPos + 1]++;
      carryPos++;
    }

    // multiply all digits
    unsigned long long found = 1;
    for (auto& x : base7)
      found *= x;

    // keep track of the sum of all rows
    count += found;
  }

  std::cout << count << std::endl;
  return 0;
}
```

This solution contains 19 empty lines, 27 comments and 2 preprocessor commands.

# Benchmark

The correct solution to the original Project Euler problem was found in 3.5 seconds on an Intel® Core™ i7-2600K CPU @ 3.40GHz. (compiled for x86_64 / Linux, GCC flags: -O3 -march=native -fno-exceptions -fno-rtti -std=gnu++11 -DORIGINAL)

See here for a comparison of all solutions. Note: interactive tests run on a weaker (=slower) computer. Some interactive tests are compiled without -DORIGINAL.

# Changelog

July 12, 2017: submitted solution
July 12, 2017: added comments

# Difficulty

Project Euler ranks this problem at 50% (out of 100%).
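The base-7 digit formula can also be cross-checked against the figure quoted in the problem statement (2361 of the 5050 entries in the first one hundred rows are not divisible by 7). A quick Python sketch, independent of the C++ code above:

```python
def count_non_divisible(row):
    # product of (digit + 1) over the base-7 digits of row
    result = 1
    while row > 0:
        result *= row % 7 + 1
        row //= 7
    return result

# first 7 rows: all 28 entries are non-divisible
assert sum(count_non_divisible(r) for r in range(7)) == 28

# first 100 rows: 2361 of 5050 entries are not divisible by 7
total = sum(count_non_divisible(r) for r in range(100))
print(total)  # -> 2361
```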
The 310 solved problems (that's level 12) had an average difficulty of 32.6% at Project Euler and I scored 13526 points (out of 15700 possible points, top rank was 17 out of ≈60000 in August 2017) at Hackerrank's Project Euler+. My username at Project Euler is stephanbrumme while it's stbrumme at Hackerrank.
https://www.vedantu.com/maths/distance-between-two-points-3d
# Distance Between Two Points 3D

Distance refers to a mathematical quantity that shows how far two points lie from each other. Indeed, distance is one of the essential mathematical quantities. It plays a significant role in advanced mathematics and physics: it helps to determine the velocity of a moving object and the magnitude and direction of gravitational and electrical forces, and it helps with signal processing too.

In mathematics, the distance formula is used for finding the distance between two points in a coordinate plane. The distance between two points can be evaluated when you know the coordinates of those points in a plane. By inserting the points in the formula, you can quickly find the distance between them. In this article, you can learn about the distance between two points in 3D, its formula, and examples.

### Distance Between Two Points Formula

Typically, in 2D space, each point in the space gets qualified by two parameters: an x-coordinate and a y-coordinate. You require a pair of coordinate axes to locate the exact position of a point in a plane. The combination of x and y coordinates gets expressed in the form of an ordered pair such as (x, y). So, the coordinates of a point, say M, can get expressed as M (x, y). That ordered pair (x, y) gives you the coordinates of the point.

Before you learn to find the distance between two points in 3D, you must know the basic distance formula. Considering two points M (x1, y1) and N (x2, y2) on the given coordinate axes, you can find the distance between them using the formula:

$MN = \sqrt{(x_{2} - x_{1})^{2} + (y_{2} - y_{1})^{2}}$

Steps to find the distance between two points:

• First, you need to take the coordinates of the two points, like (x1, y1) and (x2, y2).
• Then, you have to use the distance formula, which is √[(x2 – x1)² + (y2 – y1)²].
• Now, you have to calculate the vertical and horizontal distance between the two points.
The horizontal distance (x2 – x1) represents the points on the x-axis, and the vertical distance (y2 – y1) denotes the points on the y-axis.

• Next, you have to square both the values obtained from (x2 – x1) and (y2 – y1).
• Now, all you need to do is add both values, which looks like (x2 – x1)² + (y2 – y1)².
• Finally, you need to take the square root of the obtained value.
• The value you get in the end is the distance between the two points in the coordinate plane.

### Distance Between Two Points in 3D

The preceding discussion can be extended to find the distance between two points in space. We can determine the distance between two points in 3D using a formula derived below. For now, refer to Fig. 1. Here, points P (x1, y1, z1) and Q (x2, y2, z2) are referred to a system of rectangular axes OX, OY, and OZ.

From the points P and Q, you need to draw planes parallel to the coordinate planes. Then, you get a rectangular parallelepiped with PQ as the diagonal. As you can see in the figure, ∠PAQ is a right angle. This enables us to apply the Pythagorean theorem in triangle PAQ. So, now you get

PQ² = PA² + AQ² . . . . (I)

Also note that, in triangle ANQ, ∠ANQ is a right angle. Applying the Pythagorean theorem to ΔANQ as well, you obtain

AQ² = AN² + NQ² . . . . (II)

From equation (I) and equation (II), you get PQ² = PA² + NQ² + AN². As you know the coordinates of the points P and Q, PA = y2 − y1, AN = x2 − x1 and NQ = z2 − z1.

Hence, $PQ^{2} = (x_{2} - x_{1})^{2} + (y_{2} - y_{1})^{2} + (z_{2} - z_{1})^{2}$.

Finally, the formula to obtain the distance between two points in 3D is

$PQ = \sqrt{(x_{2} - x_{1})^{2} + (y_{2} - y_{1})^{2} + (z_{2} - z_{1})^{2}}$

That formula gives you the distance between two points P (x1, y1, z1) and Q (x2, y2, z2) in 3D. Also note that the distance of any point Q (x, y, z) in space from the origin O (0, 0, 0) can get expressed as $OQ = \sqrt{x^{2} + y^{2} + z^{2}}$.
### Solved Examples

Question 1: Find the distance between the two points given by A (6, 4, -3) and B (2, -8, 3).

Answer: Here, we need to use the distance formula to find the distance between points A and B. You have

$AB = \sqrt{(x_{2} - x_{1})^{2} + (y_{2} - y_{1})^{2} + (z_{2} - z_{1})^{2}}$

$AB = \sqrt{(6 - 2)^{2} + (4 - (-8))^{2} + (-3 - 3)^{2}}$

$AB = \sqrt{16 + 144 + 36} = \sqrt{196}$

Finally, AB = 14; so the distance between points A and B is 14.

Question 1. Explain Three Dimensions and the 3D Coordinate System.

Answer: Typically, the space dimensions get expressed as x-y-z, and they represent the width, length, and height. Three-dimensional shapes refer to shapes such as the cone, sphere, prism, cylinder, and cube. All these shapes occupy space, and they have a certain volume too. Further, the 3D coordinate system refers to a Cartesian coordinate system; it relies on a point called the origin. It comprises three mutually perpendicular axes that define the three coordinates, namely x, y, and z. You can call them the abscissa, ordinate, and applicate axis, respectively.

Question 2. Can the Distance Between Two Points be Negative?

Answer: No, the distance between two points can never be negative. Here are three reasons why.

• Distance represents how far two points are from each other. It is a physical quantity, and it cannot be negative.
• From the distance formula, it is the square root of a sum of squared numbers. Squares are non-negative, and the principal square root of a non-negative number is non-negative too.
• Even if the distance between two points is zero, it is still a non-negative number.
http://crypto.stackexchange.com/tags/compression/hot
# Tag Info

**9** Well, your definition of entropy is known as Kolmogorov complexity, and it's not so much that it is incorrect, as it is that it is inapplicable to what gzip does. For example, the value $\pi$ can also be generated by a short program; however, if you attempt to compress a 2.2Mbyte sample of the binary expansion, you'll find that gzip will also not be ...

**8** There is at least one way in which compression can weaken security; it has to do with the fact that essentially all methods of encrypting arbitrarily long messages will inevitably leak information about the length of the input. The only way to avoid this leak is to pad all messages to a constant length before encrypting them — but if the messages are ...

**8** Technically, if you use a cryptographically secure encryption algorithm with a fresh random key in a confidentiality mode such as (full block) CFB, you don't have to worry about the redundancy of the plain text, since the cipher + mode combination is supposed to be secure even if significant parts of the plain text are known to the adversary. If the cipher ...

**6** According to 7-Zip: "Use ZipCrypto, if you want to get an archive compatible with most of the ZIP archivers. AES-256 provides stronger encryption, but now AES-256 is supported only by 7-Zip, WinZip and some other ZIP archivers." So really there is some balance to be played with. Do you require better security at the sacrifice of compatibility or more ...

**4** Daniel J. Bernstein mentioned your way of compressing RSA public keys in his paper "A secure public-key signature system with extremely fast verification". The naive way you outline roughly doubles the work for each extra bit. If there were a better method which did not run very slowly then it could be repurposed as a factoring algorithm. So if it were ...
**4** Yes, a ciphertext of a bulk encryption algorithm normally should not be compressible to less than the plaintext size¹ (at least, if the compression function does not know the encryption key), other than in some corner cases which will occur only with negligible probability (like you hitting the one plaintext which will encrypt to the all-zero-string). ¹Of ...

**3** Well, the data structure of compressed data is whatever the decompression algorithm needs to be able to reconstruct the original data (assuming a lossless compression method; it's an approximation of the original data if we're talking about a lossy compression method). That might not be the answer you're looking for; you might be looking for details on ...

**3** "Also any twin-encryption algo-s around?: by which I mean, suppose I have 2 data strings (alphanumeric only, say for now) -- Using them both, and an algo, I produce the encrypted output - I take in a pair, and produce a pair. The procedure is algo-based and not key-based." One fundamental fact (or perhaps I should say "assumption") in cryptography is that ...

**3** Unlike some crypto tasks like encryption+authentication, compression+encryption have nothing in common/no synergies, so combining them into one algorithm offers no advantages. In practice this means you first compress your data, and then encrypt it, because encrypted data is uncompressable. That way you cleanly separate the concerns, and ...

**2** For cryptographic hash functions we usually want to avoid collisions as much as possible (and even more we want to avoid any way to get from the output back to the preimage). So what you want certainly is not a cryptographic hash function, but something else. On first look, something like a CRC (cyclic redundancy check) could fit your bill. These have ...

**2** Actually, it appears that we can do a bit better by using an unbalanced RSA key; that is, one composed of two primes of different sizes.
For example, suppose we have a 512 bit p and a 1536 bit q; to generate a key, we can select a random 512 bit prime p, and then for q, we search for a prime in the range $(C/p, (C+2^k)/p)$ (where $C$ is our 2048 bit ...

**2** Compressing the data increases the security a number of ways. It reduces an attacker's ability to affect the decrypted output by flipping ciphertext bits. It removes regular patterns in plaintext (it might create other regular patterns, but they aren't directly the plaintext). There are a number of attacks on OpenPGP that are thwarted by compression. Most ...

**2** If you have known plaintext, namely one input file that is known in its entirety, this is trivial to break. So I'll explore methods that might lead to a break, if you don't know what's in the input file that was compressed. I suggest that you start by analyzing the DEFLATE stream format carefully (see also these handy notes). This will probably help you ...

**1** Selective format-compliant JPEG encryption as you are trying to do it is a great idea, but it won't work... not like this. To keep the reasons short and simple: JPEG uses lossy compression (and even lossier recompression). If you really want to create a format-compliant implementation, you'll have to take care that you're independent of any ...

**1** It won't compress because data that is encrypted with AES becomes pseudo-random-like and thus as close to maximum entropy as possible. As you pointed out, the clear text input is low entropy. Additionally, entropy can be used as a way to detect clear text (given the clear text isn't pseudo-random itself). The output entropy from failed AES decrypts ...

**1** Correct me if I'm wrong, but isn't compression working because there exists some pattern in the data? Such a pattern will not exist after encryption since, ideally, the output "looks random". So, if both encryption and compression are wanted, they have to be done in the order described in the book?
https://chem.libretexts.org/Bookshelves/General_Chemistry/Book%3A_Chemistry_(Averill_and_Eldredge)/24%3A_Organic_Compounds/24.5_Common_Classes_of_Organic_Compounds
# 24.5 Common Classes of Organic Compounds

Learning Objectives

• To understand the general properties of functional groups and the differences in their reactivity.

The general properties and reactivity of each class of organic compounds are largely determined by its functional groups. In this section, we describe the relationships between structure, physical properties, and reactivity for the major classes of organic compounds. We also show you how to apply these relationships to understand some common reactions that chemists use to synthesize organic compounds.

## Alkanes, Alkenes, and Alkynes

The boiling points of alkanes increase smoothly with increasing molecular mass. They are similar to those of the corresponding alkenes and alkynes because of similarities in molecular mass between analogous structures (Table $$\PageIndex{1}$$). In contrast, the melting points of alkanes, alkenes, and alkynes with similar molecular masses show a much wider variation because the melting point strongly depends on how the molecules stack in the solid state. It is therefore sensitive to relatively small differences in structure, such as the location of a double bond and whether the molecule is cis or trans.

Table $$\PageIndex{1}$$: Boiling Points (in °C) of Alkanes, Alkenes, and Alkynes of Comparable Molecular Mass

| Class | Two C Atoms | Three C Atoms | Four C Atoms |
|--------|-------------|---------------|--------------|
| alkane | −88.6 | −42.1 | −0.5 |
| alkene | −103.8 | −47.7 | −6.3 |
| alkyne | −84.7 | −23.2 | 8.1 |

Because alkanes contain only C–C and C–H bonds, which are strong and not very polar (the electronegativities of C and H are similar), they are not easily attacked by nucleophiles or electrophiles. Consequently, their reactivity is limited, and often their reactions occur only under extreme conditions. For example, catalytic cracking can be used to convert straight-chain alkanes to highly branched alkanes, which are better fuels for internal combustion engines.
Catalytic cracking is one example of a pyrolysis reaction (from the Greek pyros, meaning “fire,” and lysis, meaning “loosening”), in which alkanes are heated to a sufficiently high temperature to induce cleavage of the weakest bonds: the C–C single bonds. The result is a mixture of radicals derived from essentially random cleavage of the various C–C bonds in the chain. Pyrolysis of n-pentane, for example, is nonspecific and can produce these four radicals:

$\mathrm{2CH_3CH_2CH_2CH_2CH_3\xrightarrow{\Delta}CH_3\cdot + CH_3CH_2CH_2CH_2\cdot + CH_3CH_2\cdot + CH_3CH_2CH_2\cdot} \tag{24.5.1}$

Recombination of these radicals (a termination step) can produce ethane, propane, butane, n-pentane, n-hexane, n-heptane, and n-octane. Radicals that are formed in the middle of a chain by cleaving a C–H bond tend to produce branched hydrocarbons. In catalytic cracking, lighter alkanes are removed from the mixture by distillation.

Radicals are also produced during the combustion of alkanes, with CO2 and H2O as the final products. Radicals are stabilized by the presence of multiple carbon substituents that can donate electron density to the electron-deficient carbon. The chemical explanation of octane ratings rests partly on the stability of radicals produced from the different hydrocarbon fuels. Recall that n-heptane, which does not burn smoothly, has an octane rating of 0, and 2,2,4-trimethylpentane (“isooctane”), which burns quite smoothly, has a rating of 100. Isooctane has a branched structure and is capable of forming tertiary radicals that are comparatively stable. In contrast, the radicals formed during the combustion of n-heptane, whether primary or secondary, are less stable and hence more reactive, which partly explains why burning n-heptane causes premature ignition and engine knocking.
In Section 24.2, we explained that rotation about the carbon–carbon multiple bonds of alkenes and alkynes cannot occur without breaking a π bond, which therefore constitutes a large energy barrier to rotation (Figure $$\PageIndex{1}$$). Consequently, the cis and trans isomers of alkenes generally behave as distinct compounds with different chemical and physical properties. A four-carbon alkene has four possible isomeric forms: three structural isomers, which differ in their connectivity, plus a pair of geometric isomers from one structural isomer (2-butene). These two geometric isomers are cis-2-butene and trans-2-butene. The four isomers have significantly different physical properties.

Figure $$\PageIndex{1}$$: Carbon–Carbon Bonding in Alkenes and Interconversion of Cis and Trans Isomers. In butane, there is only a small energy barrier to rotation about the C2–C3 σ bond. In the formation of cis- or trans-2-butene from butane, the p orbitals on C2 and C3 overlap to form a π bond. To convert cis-2-butene to trans-2-butene or vice versa through rotation about the double bond, the π bond must be broken. Because this interconversion is energetically unfavorable, cis and trans isomers are distinct compounds that generally have different physical and chemical properties.

Alkynes in which the triple bond is located at one end of a carbon chain are called terminal alkynes and contain a hydrogen atom attached directly to a triply bonded carbon: R–C≡C–H. Terminal alkynes are unusual in that the hydrogen atom can be removed relatively easily as H+, forming an acetylide ion (R–C≡C−). Acetylide ions are potent nucleophiles that are especially useful reactants for making longer carbon chains by a nucleophilic substitution reaction.
As in earlier examples of such reactions, the nucleophile attacks the partially positively charged atom in a polar bond, which in the following reaction is the carbon of the Br–C bond:

Alkenes and alkynes are most often prepared by elimination reactions. A typical example is the preparation of 2-methyl-1-propene, whose derivative, 3-chloro-2-methyl-1-propene, is used as a fumigant and insecticide. The parent compound can be prepared from either 2-hydroxy-2-methylpropane or 2-bromo-2-methylpropane:

The reaction on the left proceeds by eliminating the elements of water (H+ plus OH−), so it is a dehydration reaction. If an alkane contains two properly located functional groups, such as –OH or –X, both of them may be removed as H2O or HX with the formation of a carbon–carbon triple bond:

Alkenes and alkynes are most often prepared by elimination reactions.

## Arenes

Most arenes that contain a single six-membered ring are volatile liquids, such as benzene and the xylenes, although some arenes with substituents on the ring are solids at room temperature. In the gas phase, the dipole moment of benzene is zero, but the presence of electronegative or electropositive substituents can result in a net dipole moment that increases intermolecular attractive forces and raises the melting and boiling points. For example, 1,4-dichlorobenzene, a compound used as an alternative to naphthalene in the production of mothballs, has a melting point of 52.7°C, which is considerably greater than the melting point of benzene (5.5°C).

Certain aromatic hydrocarbons, such as benzene and benz[a]pyrene, are potent liver toxins and carcinogens. In 1775, a British physician, Percival Pott, described the high incidence of cancer of the scrotum among small boys used as chimney sweeps and attributed it to their exposure to soot. His conclusions were correct: benz[a]pyrene, a component of chimney soot, charcoal-grilled meats, and cigarette smoke, was the first chemical carcinogen to be identified.
Although arenes are usually drawn with three C=C bonds, benzene is about 150 kJ/mol more stable than would be expected if it contained three double bonds. This increased stability is due to the delocalization of the π electron density over all the atoms of the ring. Compared with alkenes, arenes are poor nucleophiles. Consequently, they do not undergo addition reactions like alkenes; instead, they undergo a variety of electrophilic aromatic substitution reactions that involve the replacement of –H on the arene by a group –E, such as –NO2, –SO3H, a halogen, or an alkyl group, in a two-step process. The first step involves addition of the electrophile (E) to the π system of benzene, forming a carbocation. In the second step, a proton is lost from the adjacent carbon on the ring: The carbocation formed in the first step is stabilized by resonance. Arenes undergo substitution reactions rather than elimination because of increased stability arising from delocalization of their π electron density. Many substituted arenes have potent biological activity. Some examples include common drugs and antibiotics such as aspirin and ibuprofen, illicit drugs such as amphetamines and peyote, the amino acid phenylalanine, and hormones such as adrenaline (Figure $$\PageIndex{2}$$). Aspirin (antifever activity), ibuprofen (antifever and anti-inflammatory activity), and amphetamine (stimulant) have pharmacological effects. Phenylalanine is an amino acid. Adrenaline is a hormone that elicits the “fight or flight” response to stress. Chiral centers are indicated with an asterisk. ## Alcohols and Ethers Both alcohols and ethers can be thought of as derivatives of water in which at least one hydrogen atom has been replaced by an organic group, as shown here. Because of the electronegative oxygen atom, the individual O–H bond dipoles in alcohols cannot cancel one another, resulting in a substantial dipole moment that allows alcohols to form hydrogen bonds. 
Alcohols therefore have significantly higher boiling points than alkanes or alkenes of comparable molecular mass, whereas ethers, without a polar O–H bond, have intermediate boiling points due to the presence of a small dipole moment (Table $$\PageIndex{2}$$). The larger the alkyl group in the molecule, however, the more “alkane-like” the alcohol is in its properties. Because of their polar nature, alcohols and ethers tend to be good solvents for a wide range of organic compounds.

Table $$\PageIndex{2}$$: Boiling Points of Alkanes, Ethers, and Alcohols of Comparable Molecular Mass

| Class | Name | Formula | Molecular Mass (amu) | Boiling Point (°C) |
|---------|--------------------|----------------|----------------------|--------------------|
| alkane | propane | C3H8 | 44 | −42.1 |
| alkane | n-pentane | C5H12 | 72 | 36.1 |
| alkane | n-heptane | C7H16 | 100 | 98.4 |
| ether | dimethyl ether | (CH3)2O | 46 | −24.8 |
| ether | diethyl ether | (CH3CH2)2O | 74 | 34.5 |
| ether | di-n-propyl ether | (CH3CH2CH2)2O | 102 | 90.1 |
| alcohol | ethanol | CH3CH2OH | 46 | 78.3 |
| alcohol | n-butanol | CH3(CH2)3OH | 74 | 117.7 |
| alcohol | n-hexanol | CH3(CH2)5OH | 102 | 157.6 |

Alcohols are usually prepared by adding water across a carbon–carbon double bond or by a nucleophilic substitution reaction of an alkyl halide using hydroxide, a potent nucleophile (Figure $$\PageIndex{1}$$). Alcohols can also be prepared by reducing compounds that contain the carbonyl functional group (C=O; part (a) in Figure 24.5.7). Alcohols are classified as primary, secondary, or tertiary, depending on whether the –OH group is bonded to a primary, secondary, or tertiary carbon. For example, the compound 5-methyl-3-hexanol is a secondary alcohol.

Ethers, especially those with two different alkyl groups (ROR′), can be prepared by a substitution reaction in which a nucleophilic alkoxide ion (RO−) attacks the partially positively charged carbon atom of the polar C–X bond of an alkyl halide (R′X):

Although both alcohols and phenols have an –OH functional group, phenols are 10⁶–10⁸ times more acidic than alcohols.
This is largely because simple alcohols have the –OH unit attached to an sp3 hybridized carbon, whereas phenols have an sp2 hybridized carbon atom bonded to the oxygen atom. The negative charge of the phenoxide ion can therefore interact with the π electrons in the ring, thereby delocalizing and stabilizing the negative charge through resonance. In contrast, the negative charge on an alkoxide ion cannot be stabilized by these types of interactions.

Alcohols undergo two major types of reactions: those involving cleavage of the O–H bond and those involving cleavage of the C–O bond. Cleavage of an O–H bond is a reaction characteristic of an acid, but alcohols are even weaker acids than water. The acidic strength of phenols, however, is about a million times greater than that of ethanol, making the pKa of phenol comparable to that of the NH4+ ion (9.89 versus 9.25, respectively):

$C_6H_5OH + H_2O \rightleftharpoons H_3O^+ + C_6H_5O^- \tag{24.5.2}$

Alcohols undergo two major types of reactions: cleavage of the O–H bond and cleavage of the C–O bond.

Cleavage of the C–O bond in alcohols occurs under acidic conditions. The –OH is first protonated, and nucleophilic substitution follows: In the absence of a nucleophile, however, elimination can occur, producing an alkene (Figure 24.5.6).

Ethers lack the –OH unit that is central to the reactivity of alcohols, so they are comparatively unreactive. Their low reactivity makes them highly suitable as solvents for carrying out organic reactions.

## Aldehydes and Ketones

Aromatic aldehydes, which have intense and characteristic flavors and aromas, are the major components of such well-known flavorings as vanilla and cinnamon (Figure 24.5.3). Many ketones, such as camphor and jasmine, also have intense aromas. Ketones are found in many of the hormones responsible for sex differentiation in humans, such as progesterone and testosterone.
In compounds containing a carbonyl group, nucleophilic attack can occur at the carbon atom of the carbonyl, whereas electrophilic attack occurs at the oxygen. Aldehydes and ketones contain the carbonyl functional group, which has an appreciable dipole moment because of the polar C=O bond. The presence of the carbonyl group results in strong intermolecular interactions that cause aldehydes and ketones to have higher boiling points than alkanes or alkenes of comparable molecular mass (Table $$\PageIndex{3}$$). As the mass of the molecule increases, the carbonyl group becomes less important to the overall properties of the compound, and the boiling points approach those of the corresponding alkanes.

Table $$\PageIndex{3}$$: Boiling Points of Alkanes, Aldehydes, and Ketones of Comparable Molecular Mass

| Class    | Name                             | Formula | Molecular Mass (amu) | Boiling Point (°C) |
|----------|----------------------------------|---------|----------------------|--------------------|
| alkane   | n-butane                         | C4H10   | 58                   | −0.5               |
| alkane   | n-pentane                        | C5H12   | 72                   | 36.1               |
| aldehyde | propionaldehyde (propanal)       | C3H6O   | 58                   | 48.0               |
| aldehyde | butyraldehyde (butanal)          | C4H8O   | 72                   | 74.8               |
| ketone   | acetone (2-propanone)            | C3H6O   | 58                   | 56.1               |
| ketone   | methyl ethyl ketone (2-butanone) | C4H8O   | 72                   | 79.6               |

Aldehydes and ketones are typically prepared by oxidizing alcohols (part (a) in Figure 24.5.7). In their reactions, the partially positively charged carbon atom of the carbonyl group is an electrophile that is subject to nucleophilic attack. Conversely, the lone pairs of electrons on the oxygen atom of the carbonyl group allow electrophilic attack to occur. Aldehydes and ketones can therefore undergo both nucleophilic attack (at the carbon atom) and electrophilic attack (at the oxygen atom). Nucleophilic attack occurs at the partially positively charged carbon of a carbonyl functional group. Electrophilic attack occurs at the lone pairs of electrons on the oxygen atom. Aldehydes and ketones react with many organometallic compounds that contain stabilized carbanions.
One of the most important classes of such compounds is the Grignard reagents, organomagnesium compounds with the formula RMgX (X is Cl, Br, or I) that are so strongly polarized that they can be viewed as containing R− and MgX+. These reagents are named for the French chemist Victor Grignard (1871–1935), who won a Nobel Prize in Chemistry in 1912 for their development. In a Grignard reaction, the carbonyl functional group is converted to an alcohol, and the carbon chain of the carbonyl compound is lengthened by the addition of the R group from the Grignard reagent. One example is reacting cyclohexylmagnesium chloride, a Grignard reagent, with formaldehyde: The nucleophilic carbanion of the cyclohexyl ring attacks the electrophilic carbon atom of the carbonyl group. Acidifying the solution results in protonation of the intermediate to give the alcohol. Aldehydes can also be prepared by reducing a carboxylic acid group (–CO2H) (part (a) in Figure 24.5.7), and ketones can be prepared by reacting a carboxylic acid derivative with a Grignard reagent. The former reaction requires a powerful reducing agent, such as a metal hydride.

Example $$\PageIndex{1}$$

Explain how each reaction proceeds to form the indicated product.

Given: chemical reaction

Asked for: how products are formed

Strategy:
1. Identify the functional group and classify the reaction.
2. Use the mechanisms described to propose the initial steps in the reaction.

Solution:
1. A One reactant is an alcohol that undergoes a substitution reaction. B In the product, a bromide group is substituted for a hydroxyl group. The first step in this reaction must therefore be protonation of the –OH group of the alcohol by H+ of HBr, followed by the elimination of water to give the carbocation: The bromide ion is a good nucleophile that can react with the carbocation to give an alkyl bromide:
2. A One reactant is a Grignard reagent, and the other contains a carbonyl functional group.
Carbonyl compounds act as electrophiles, undergoing nucleophilic attack at the carbonyl carbon. B The nucleophile is the phenyl carbanion of the Grignard reagent: The product is benzyl alcohol.

Exercise $$\PageIndex{1}$$

Predict the product of each reaction.

## Carboxylic Acids

The pungent odors of many carboxylic acids are responsible for the smells we associate with sources as diverse as Swiss cheese, rancid butter, manure, goats, and sour milk. The boiling points of carboxylic acids tend to be somewhat higher than would be expected from their molecular masses because of strong hydrogen-bonding interactions between molecules. In fact, most simple carboxylic acids form dimers in the liquid and even in the vapor phase. The four lightest carboxylic acids are completely miscible with water, but as the alkyl chain lengthens, they become more “alkane-like,” so their solubility in water decreases. Compounds that contain the carboxyl functional group are acidic because carboxylic acids can easily lose a proton: the negative charge in the carboxylate ion (RCO2−) is stabilized by delocalization of the π electrons: As a result, carboxylic acids are about 10^10 times more acidic than the corresponding simple alcohols, whose anions (RO−) are not stabilized through resonance. Carboxylic acids are typically prepared by oxidizing the corresponding alcohols and aldehydes (part (a) in Figure 24.5.7). They can also be prepared by reacting a Grignard reagent with CO2, followed by acidification: $\mathrm{CO_2+ RMgCl \xrightarrow{H_3O^+} RCO_2H + Mg^{2+}+ Cl^-+ H_2O} \tag{24.5.2}$ The initial step in the reaction is nucleophilic attack by the R group of the Grignard reagent on the electrophilic carbon of CO2: Delocalization of π bonding over three atoms (O–C–O) makes carboxylic acids and their derivatives less susceptible to nucleophilic attack than aldehydes and ketones with their single π bond.
The reactions of carboxylic acids are dominated by two factors: their polar –CO2H group and their acidity. Reaction with strong bases, for example, produces carboxylate salts, such as sodium stearate: $RCO_2H + NaOH \rightarrow RCO_2^−Na^+ + H_2O \tag{24.5.3}$ where R is CH3(CH2)16. As you learned previously, long-chain carboxylate salts are used as soaps. Delocalization of π bonding over three atoms makes carboxylic acids and their derivatives less susceptible to nucleophilic attack as compared with aldehydes and ketones.

## Carboxylic Acid Derivatives

Replacing the –OH of a carboxylic acid with groups that have different tendencies to participate in resonance with the C=O functional group produces derivatives with rather different properties. Resonance structures have significant effects on the reactivity of carboxylic acid derivatives, but their influence varies substantially, being least important for halides and most important for the nitrogen of amides. In this section, we take a brief look at the chemistry of two of the most familiar and important carboxylic acid derivatives: esters and amides.

### Esters

Esters have the general formula RCO2R′, where R and R′ can be virtually any alkyl or aryl group. Esters are often prepared by reacting an alcohol (R′OH) with a carboxylic acid (RCO2H) in the presence of a catalytic amount of strong acid. The purpose of the acid (an electrophile) is to protonate the doubly bonded oxygen atom of the carboxylic acid (a nucleophile) to give a species that is more electrophilic than the parent carboxylic acid. The nucleophilic oxygen atom of the alcohol attacks the electrophilic carbon atom of the protonated carboxylic acid to form a new C–O bond. The overall reaction can be written as follows: Because water is eliminated, this is a dehydration reaction.
If an aqueous solution of an ester and strong acid or base is heated, the reverse reaction will occur, producing the parent alcohol R′OH and either the carboxylic acid RCO2H (under strongly acidic conditions) or the carboxylate anion RCO2− (under basic conditions). As stated earlier, esters are familiar to most of us as fragrances, such as banana and pineapple. Other esters with intense aromas function as sex attractants, or pheromones, such as the pheromone from the oriental fruit fly. Research on using synthetic insect pheromones as a safer alternative to insecticides for controlling insect populations, such as cockroaches, is a rapidly growing field in organic chemistry.

### Amides

In the general structure of an amide, the two substituents on the amide nitrogen can be hydrogen atoms, alkyl groups, aryl groups, or any combination of those species. Although amides appear to be derived from an acid and an amine, in practice they usually cannot be prepared by this synthetic route. In principle, nucleophilic attack by the lone electron pair of the amine on the carbon of the carboxylic acid could occur, but because carboxylic acids are weak acids and amines are weak bases, an acid–base reaction generally occurs instead: $RCO_2H + R′NH_2 \rightarrow RCO_2^− + R′NH_3^+ \tag{24.5.4}$ Amides are therefore usually prepared by the nucleophilic reaction of amines with more electrophilic carboxylic acid derivatives, such as esters. The lone pair of electrons on the nitrogen atom of an amide can participate in π bonding with the carbonyl group, thus reducing the reactivity of the amide (Figure 24.5.5) and inhibiting free rotation about the C–N bond. Amides are therefore the least reactive of the carboxylic acid derivatives. The stability of the amide bond is crucially important in biology because amide bonds form the backbones of peptides and proteins.
The amide bond is also found in many other biologically active and commercially important molecules, including penicillin; urea, which is used as fertilizer; saccharin, a sugar substitute; and Valium, a potent tranquilizer. Amides are the least reactive of the carboxylic acid derivatives because the lone pair of electrons on the amide nitrogen participates in π bonding with the carbonyl group.

## Amines

Amines are derivatives of ammonia in which one or more hydrogen atoms have been replaced by alkyl or aryl groups. They are therefore analogous to alcohols and ethers. Like alcohols, amines are classified as primary, secondary, or tertiary, but in this case the designation refers to the number of alkyl groups bonded to the nitrogen atom, not to the number of adjacent carbon atoms. In primary amines, the nitrogen is bonded to two hydrogen atoms and one alkyl group; in secondary amines, the nitrogen is bonded to one hydrogen and two alkyl groups; and in tertiary amines, the nitrogen is bonded to three alkyl groups. With one lone pair of electrons and C–N bonds that are less polar than C–O bonds, ammonia and simple amines have much lower boiling points than water or alcohols with similar molecular masses. Primary amines tend to have boiling points intermediate between those of the corresponding alcohol and alkane. Moreover, secondary and tertiary amines have lower boiling points than primary amines of comparable molecular mass. Tertiary amines can react further to form cations analogous to the ammonium ion (NH4+), in which all four H atoms are replaced by alkyl groups. Such substances, called quaternary ammonium salts, can be chiral if all four substituents are different. (Amines with three different substituents are also chiral because the lone pair of electrons represents a fourth substituent.)
Alkylamines can be prepared by nucleophilic substitution reactions of alkyl halides with ammonia or other amines: $RCl + NH_3 \rightarrow RNH_2 + HCl \tag{24.5.5}$ $RCl + R′NH_2 \rightarrow RR′NH + HCl \tag{24.5.6}$ $RCl + R′R″NH \rightarrow RR′R″N + HCl \tag{24.5.7}$ The primary amine formed in the first reaction (Equation 24.5.5) can react with more alkyl halide to generate a secondary amine (Equation 24.5.6), which in turn can react to form a tertiary amine (Equation 24.5.7). Consequently, the actual reaction mixture contains primary, secondary, and tertiary amines and even quaternary ammonium salts. The reactions of amines are dominated by two properties: their ability to act as weak bases and their tendency to act as nucleophiles, both of which are due to the presence of the lone pair of electrons on the nitrogen atom. Amines typically behave as bases by accepting a proton from an acid to form an ammonium salt, as in the reaction of triethylamine (the ethyl group is represented as Et) with aqueous HCl (the lone pair of electrons on nitrogen is shown): $Et_3N:(l) + HCl{(aq)} \rightarrow Et_3NH^+Cl^−_{(aq)} \tag{24.5.8}$ which gives triethylammonium chloride. Amines can react with virtually any electrophile, including the carbonyl carbon of an aldehyde, a ketone, or an ester. Aryl amines such as aniline (C6H5NH2) are much weaker bases than alkylamines because the lone pair of electrons on nitrogen interacts with the π bonds of the aromatic ring, delocalizing the lone pair through resonance (Figure 24.5.6). Note The reactions of amines are dominated by their ability to act as weak bases and their tendency to act as nucleophiles. Delocalization of the lone electron pair on N over the benzene ring reduces the basicity of aryl amines, such as aniline, compared with that of alkylamines, such as cyclohexylamine. 
These electrostatic potential maps show that the electron density on the N of cyclohexylamine is more localized than it is in aniline, which makes cyclohexylamine a stronger base.

Example $$\PageIndex{2}$$

Predict the products formed in each reaction and show the initial site of attack and, for part (b), the final products.

1. C6H5CH2CO2H + KOH →

Given: reactants

Asked for: products and mechanism of reaction

Strategy: Use the strategy outlined in Example $$\PageIndex{1}$$.

Solution:
1. The proton on the carboxylic acid functional group is acidic. Thus reacting a carboxylic acid with a strong base is an acid–base reaction, whose products are a salt—in this case, C6H5CH2CO2−K+—and water.
2. The nitrogen of cyclohexylamine contains a lone pair of electrons, making it an excellent nucleophile, whereas the carbonyl carbon of ethyl acetate is a good electrophile. We therefore expect a reaction in which nucleophilic attack on the carbonyl carbon of the ester produces an amide and ethanol. The initial site of attack and the reaction products are as follows:

Exercise $$\PageIndex{2}$$

Predict the products of each reaction. State the initial site of attack.

1. acetic acid with 1-propanol
2. aniline (C6H5NH2) with propyl acetate [CH3C(=O)OCH2CH2CH3]

Answers:
1. Initial attack occurs with protonation of the oxygen of the carbonyl. The products are:
2. Initial attack occurs at the carbon of the carbonyl group. The products are:

Reactions like those discussed in this section are used to synthesize a wide range of organic compounds. When chemists plan the synthesis of an organic molecule, however, they must take into consideration various factors, such as the availability and cost of reactants, the need to minimize the formation of undesired products, and the proper sequencing of reactions to maximize the yield of the target molecule.
Because the synthesis of many organic molecules requires multiple steps, in designing a synthetic scheme for such molecules, chemists must often work backward from the desired product in a process called retrosynthesis. Using this process, they can identify the reaction steps needed to synthesize the desired product from the available reactants. ## Summary • The physical properties and reactivity of compounds containing the common functional groups are intimately connected to their structures. There are strong connections among the structure, the physical properties, and the reactivity for compounds that contain the major functional groups. Hydrocarbons that are alkanes undergo catalytic cracking, which can convert straight-chain alkanes to highly branched alkanes. Catalytic cracking is one example of a pyrolysis reaction, in which the weakest bond is cleaved at high temperature, producing a mixture of radicals. The multiple bond of an alkene produces geometric isomers (cis and trans). Terminal alkynes contain a hydrogen atom directly attached to a triply bonded carbon. Removal of the hydrogen forms an acetylide ion, a potent nucleophile used to make longer carbon chains. Arenes undergo substitution rather than elimination because of enhanced stability from delocalization of their π electron density. An alcohol is often prepared by adding the elements of water across a double bond or by a substitution reaction. Alcohols undergo two major types of reactions: those involving cleavage of the O–H bond and those involving cleavage of the C–O bond. Phenols are acidic because of π interactions between the oxygen atom and the ring. Ethers are comparatively unreactive. Aldehydes and ketones are generally prepared by oxidizing alcohols. Their chemistry is characterized by nucleophilic attack at the carbon atom of the carbonyl functional group and electrophilic attack at the oxygen atom. 
Grignard reagents (RMgX, where X is Cl, Br, or I) convert the carbonyl functional group to an alcohol and lengthen the carbon chain. Compounds that contain the carboxyl functional group are weakly acidic because of delocalization of the π electrons, which causes them to easily lose a proton and form the carboxylate anion. Carboxylic acids are generally prepared by oxidizing alcohols and aldehydes or reacting a Grignard reagent with CO2. Carboxylic acid derivatives include esters, prepared by reacting a carboxylic acid and an alcohol, and amides, prepared by the nucleophilic reaction of amines with more electrophilic carboxylic acid derivatives, such as esters. Amides are relatively unreactive because of π bonding interactions between the lone pair on nitrogen and the carbonyl group. Amines can also be primary, secondary, or tertiary, depending on the number of alkyl groups bonded to the amine. Quaternary ammonium salts have four substituents attached to nitrogen and can be chiral. Amines are often prepared by a nucleophilic substitution reaction between a polar alkyl halide and ammonia or other amines. They are nucleophiles, but their base strength depends on their substituents. ## Conceptual Problems 1. Why do branched-chain alkanes have lower melting points than straight-chain alkanes of comparable molecular mass? 2. Describe alkanes in terms of their orbital hybridization, polarity, and reactivity. What is the geometry about each carbon of a straight-chain alkane? 3. Why do alkenes form cis and trans isomers, whereas alkanes do not? Do alkynes form cis and trans isomers? Why or why not? 4. Which compounds can exist as cis and trans isomers? 1. 2,3-dimethyl-1-butene 2. 3-methyl-1-butene 3. 2-methyl-2-pentene 4. 2-pentene 1. Which compounds can exist as cis and trans isomers? 1. 3-ethyl-3-hexene 2. 1,1-dichloro-1-propene 3. 1-chloro-2-pentene 4. 3-octene 1. Which compounds have a net dipole moment? 1. o-nitrotoluene 2. p-bromonitrobenzene 3. p-dibromobenzene 1. 
Why is the boiling point of an alcohol so much greater than that of an alkane of comparable molecular mass? Why are low-molecular-mass alcohols reasonably good solvents for some ionic compounds, whereas alkanes are not? 2. Is an alcohol a nucleophile or an electrophile? What determines the mode of reactivity of an alcohol? How does the reactivity of an alcohol differ from that of an ionic compound containing OH, such as KOH? 3. How does the reactivity of ethers compare with that of alcohols? Why? Ethers can be cleaved under strongly acidic conditions. Explain how this can occur. 4. What functional group is common to aldehydes, ketones, carboxylic acids, and esters? This functional group can react with both nucleophiles and electrophiles. Where does nucleophilic attack on this functional group occur? Where does electrophilic attack occur? 5. What key feature of a Grignard reagent allows it to engage in a nucleophilic attack on a carbonyl carbon? 6. Do you expect carboxylic acids to be more or less water soluble than ketones of comparable molecular mass? Why? 7. Because amides are formally derived from an acid plus an amine, why can they not be prepared by the reaction of an acid with an amine? How are they generally prepared? 8. Is an amide susceptible to nucleophilic attack, electrophilic attack, or both? Specify where the attack occurs. 9. What factors determine the reactivity of amines? 1. (c) and (d) 1. The presence of a nucleophilic Cδ− resulting from a highly polar interaction with an electropositive Mg 1. Their ability to act as weak bases and their tendency to act as nucleophiles ## Structure and Reactivity 1. What is the product of the reaction of 2-butyne with excess HBr? 2. What is the product of the reaction of 3-hexyne with excess HCl? 3. What elements are eliminated during the dehydrohalogenation of an alkyl halide? What products do you expect from the dehydrohalogenation of 2-chloro-1-pentene? 4. 
What elements are eliminated during the dehydration of an alcohol? What products do you expect from the dehydration of ethanol? 5. Predict the products of each reaction. 1. sodium phenoxide with ethyl chloride 2. 1-chloropropane with NaOH 1. Show the mechanism and predict the organic product of each reaction. 1. 2-propanol + HCl 2. cyclohexanol + H2SO4 1. A Grignard reagent can be used to generate a carboxylic acid. Show the mechanism for the first step in this reaction using CH3CH2MgBr as the Grignard reagent. What is the geometry about the carbon of the –CH2 of the intermediate species formed in this first step? 2. Draw a molecular orbital picture showing the bonding in an amide. What orbital is used for the lone pair of electrons on nitrogen? 3. What is the product of the reaction of 1. acetic acid with ammonia? 2. methyl acetate with ethylamine, followed by heat? 1. Develop a synthetic scheme to generate 1. 1,1-dichloroethane from 1,1-dibromoethane. 2. 2-bromo-1-heptene from 1-bromopentane. 1. 2,2-dibromobutane 1. C6H5OC2H5 + NaCl 2. 1-propanol + NaCl 1. CH3CO2 NH4+ (an acid-base reaction) 2. CH3CONHC2H5 + CH3OH
# Box-Jenkins, and JMP

There are two main classes of model for time series data – autoregressive (AR) and moving average (MA). The generalisation of the two is referred to as ARIMA – autoregressive integrated moving average. These models are sometimes referred to as Box-Jenkins models, but more accurately the term “Box-Jenkins” refers to a methodology for model selection.

## The Nature of Time Series

A time series is a one-dimensional set of data sequenced by time. Based on this sequence, time series analysis seeks to make inferences about future values. To do this a model must be constructed, and that model needs to exploit the serial dependencies within the data. This is very different to ordinary least squares, where observations are assumed to be independent of each other.

If it is a hot day today then you’ll probably have a reasonable expectation that it will be hot tomorrow

There are two mechanisms (or processes) through which dependencies can arise. I’ll try and illustrate the first with respect to the weather. If it’s a hot day today then you’ll probably have a reasonable expectation that it will be hot tomorrow (this probably works if you live in California; if you live in the UK replace ‘today’ and ‘tomorrow’ with ‘now’ and ‘in one hour’s time’!). If this expectation is correct then today’s weather will also be related to yesterday’s weather; logically therefore tomorrow’s weather is dependent on the weather of both today and yesterday. This leads to an autoregressive model where there is correlation with the prior data points. The parameters of an autoregressive model quantify these levels of correlation. We might expect the correlation to weaken as we look further back in time. Later we’ll see that this manifests itself as a decaying envelope containing correlation values.
That paragraph can be summarised much more effectively with mathematical notation:

$y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \cdots + \phi_p y_{t-p} + \varepsilon_t$

This is an autoregressive model of order p. As with any statistical model it contains signal (y) components and noise (ε) components. With the autoregressive model the value of one data point is dependent on the prior values of the signal, whereas with a moving average model the value is dependent on the prior values of the noise:

$y_t = \varepsilon_t + \theta_1\varepsilon_{t-1} + \theta_2\varepsilon_{t-2} + \cdots + \theta_q\varepsilon_{t-q}$

This is a moving average model of order q.

## Model Identification

This can be performed by looking at an autocorrelation function and a partial autocorrelation function.

One, if not the, key step in the creation of a time series model is to determine whether an autoregressive (AR) or moving average (MA) model should be used to explain the serial correlation within the data. This can be performed by looking at an autocorrelation function (ACF) and a partial autocorrelation function (PACF). Fortunately I can explain the utility of these functions using the graphical output of the JMP Time Series platform without any further reference to the underlying mathematics!

Let’s take some data that I know to be from a second-order autoregressive (AR) model (I know because I have a script to generate the data). Here is the body of the output from the JMP Time Series platform:

On the left there is an autocorrelation function (ACF) and on the right there is the partial autocorrelation function (PACF). It’s not my intention to go into the mathematics. The way we use these graphs is similar to residual analysis on ordinary regression. If I have an autoregressive model then I expect the magnitude of the ACF correlations to decay smoothly as the lag increases. At the same time I expect the PACF correlations to abruptly reduce at the point where there is no further significant correlation.
For this data there are three large correlation bars on the PACF. But note that the third bar corresponds to a lag of 2, not 3. The first bar represents a lag of zero – the correlation of a data point with itself (do I really want to see this?). Now let’s look at how the output differs for a second-order moving average model. The exponential decay of the ACF, indicative of an autoregressive process, is not present. For a moving average model, it is the ACF rather than the PACF that indicates the order of the model.

## Building the Model

I will build a model based on the last set of data, where the inspection of the correlations leads me to postulate a second-order moving average model. Once the order of the model is specified, JMP will estimate the model terms. Now that I have a model, I want to look at the correlation plots again, but this time with respect to the residuals. My goal is to generate a set of residuals that exhibit no correlation. In this example it looks like I have achieved my goal:

Whether we want to assess model improvements as we iteratively refine the model, or whether there are competing models with different structure, JMP provides a table of model performance metrics to aid model comparison:

The above table shows that I was correct to select an MA(2) model as opposed to an AR(2) model, and that the second-order model is significantly better than a first-order model.

In general a time series model may contain both AR and MA components. But both of these model types are based on a time series with constant mean. This requirement is violated if there is either a trend or seasonal variation. Time series models are typically built up iteratively to take account of the different components. Trends and seasonality are taken into account by a process known as differencing.
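Differencing itself is a one-line operation: subtracting each observation from the previous one removes a linear trend, and differencing at the seasonal lag removes a repeating seasonal component. A tiny illustration of my own (plain Python, independent of JMP):

```python
# First differencing turns a linear trend into a constant series.
t = list(range(12))
trend = [2.5 * v + 7.0 for v in t]                 # deterministic linear trend
diff1 = [b - a for a, b in zip(trend, trend[1:])]
print(diff1)  # every entry is the slope, 2.5

# Seasonal differencing (here lag 4) removes a period-4 seasonal pattern.
season = [10.0 * (v % 4) for v in t]
diff4 = [season[i] - season[i - 4] for i in range(4, len(season))]
print(diff4)  # all zeros
```

After differencing, the (now constant-mean) series can be modelled with the AR/MA machinery described above.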
Version 12 of JMP introduces new decomposition methods to remove trend and seasonal effects, including the X-11 method developed by the US Bureau of the Census.

If you have opened the Time Series platform in the past and been intimidated by the output then I hope that this has served as a useful introduction to the graphical output that is produced by JMP in support of the Box-Jenkins methodology. Why not take some time to take another look, and check out the features new to version 12 of JMP, found under the decomposition menu.
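The ACF/PACF signatures discussed earlier (for an AR process, an ACF that decays smoothly and a PACF that cuts off after the model order) can also be reproduced numerically without JMP. The sketch below is my own illustration, not part of the original post: it simulates an AR(2) series in plain Python and computes the sample ACF, then the PACF via the Durbin-Levinson recursion.

```python
import random

# Simulate an AR(2) process: y_t = 0.6*y_{t-1} + 0.3*y_{t-2} + e_t
# (coefficients chosen arbitrarily for illustration)
random.seed(42)
N = 5000
y = [0.0, 0.0]
for _ in range(2, N):
    y.append(0.6 * y[-1] + 0.3 * y[-2] + random.gauss(0.0, 1.0))

def acf(x, nlags):
    """Sample autocorrelation for lags 0..nlags."""
    mean = sum(x) / len(x)
    x = [v - mean for v in x]
    denom = sum(v * v for v in x)
    return [sum(x[i] * x[i + k] for i in range(len(x) - k)) / denom
            for k in range(nlags + 1)]

def pacf(r):
    """Partial autocorrelations from an ACF (Durbin-Levinson recursion)."""
    out, phi = [1.0], []
    for k in range(1, len(r)):
        if k == 1:
            pk, phi = r[1], [r[1]]
        else:
            num = r[k] - sum(phi[j] * r[k - 1 - j] for j in range(k - 1))
            den = 1.0 - sum(phi[j] * r[j + 1] for j in range(k - 1))
            pk = num / den
            phi = [phi[j] - pk * phi[k - 2 - j] for j in range(k - 1)] + [pk]
        out.append(pk)
    return out

r = acf(y, 10)
p = pacf(r)
print([round(v, 2) for v in r[:6]])  # decays smoothly: the AR signature
print([round(v, 2) for v in p[:6]])  # near zero beyond lag 2: order = 2
```

With a long enough series the sample PACF at lag 2 lands near the true coefficient 0.3, while higher-lag PACF values hover around zero, mirroring the abrupt cut-off seen in the JMP output.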
# Find the determinant of the matrix $$\left| {\begin{array}{*{20}{c}} 3&2&1\\ 3&2&1\\ 1&0&1 \end{array}} \right|$$

This question was previously asked in the Airforce Group X 4 November 2020 Memory Based Paper.

1. 0
2. 3
3. 5
4. None of these

Answer: Option 1 : 0

## Detailed Solution

CONCEPT:

Properties of the determinant of a matrix:

- If each entry in any row or column of a determinant is 0, then the value of the determinant is zero.
- For any square matrix A, |A| = |A^T|.
- If we interchange any two rows (columns) of a matrix, then the determinant is multiplied by −1.
- If any two rows (columns) of a matrix are the same, then the value of the determinant is zero.

CALCULATION:

Here, we have to find the value of $$\left| {\begin{array}{*{20}{c}} 3&2&1\\ 3&2&1\\ 1&0&1 \end{array}} \right|$$

As we can see, the first and the second rows of the given matrix are equal. We know that if any two rows (columns) of a matrix are the same, then the value of the determinant is zero.

So, $$\left| {\begin{array}{*{20}{c}} 3&2&1\\ 3&2&1\\ 1&0&1 \end{array}} \right| = 0$$

Hence, option A is the correct answer.
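As a quick numerical sanity check (not part of the original solution), the determinant can be computed by cofactor expansion, and the row-interchange property explains why two equal rows force a zero determinant:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

M = [[3, 2, 1],
     [3, 2, 1],
     [1, 0, 1]]
print(det3(M))  # 0 -- rows 1 and 2 are identical

# Why identical rows force zero: swapping two rows negates the determinant,
# but swapping two equal rows changes nothing, so det = -det, i.e. det = 0.
swapped = [M[1], M[0], M[2]]
print(det3(swapped) == -det3(M))  # True
```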
# Four Squares Together

Lagrange's four square theorem tells us any natural number can be represented as the sum of four square numbers. Your task is to write a program that does this.

Input: A natural number (below 1 billion)

Output: Four numbers whose squares sum to that number (order doesn't matter)

Note: You don't have to do a brute force search! Details here and here. If there is a function that trivializes this problem (I will determine), it's not allowed. Automated prime functions and square root are allowed. If there is more than one representation, any one is fine. If you choose to do brute force, it must run within reasonable time (3 minutes).

sample input

    123456789

sample output (either is fine)

    10601 3328 2 0
    10601 3328 2

• I may do brute force though if it makes my code shorter? – Martin Ender May 17 '14 at 23:59
• @m.buettner Yes, but it should handle large numbers – qwr May 18 '14 at 0:01
• @m.buettner Read the post, any natural number below 1 billion – qwr May 18 '14 at 0:09
• Ah sorry, overlooked that. – Martin Ender May 18 '14 at 0:12
• @Dennis Natural numbers in this case do not include 0. – qwr May 22 '14 at 19:56

# CJam, 50 bytes

    li:NmF0=~2/#:J(_{;)__*N\-[{_mqJ/iJ*__*@\-}3*])}g+p

My third (and final, I promise) answer. This approach is based heavily on primo's answer.

Try it online in the CJam interpreter.

    $ cjam 4squares.cjam <<< 999999999
    [189 31617 567 90]

### Background

1. After seeing primo's updated algorithm, I had to see how a CJam implementation would score:

       li{W):W;:N4md!}g;Nmqi)_{;(__*N\-[{_mqi__*@\-}3*])}g+2W#f*p

   Only 58 bytes! This algorithm performs in nearly constant time and doesn't exhibit much variation for different values of N. Let's change that...

2. Instead of starting at floor(sqrt(N)) and decrementing, we can start at 1 and increment. This saves 4 bytes.

       li{W):W;:N4md!}g;0_{;)__*N\-[{_mqi__*@\-}3*])}g+2W#f*p

3. Instead of expressing N as 4**a * b, we can express it as p**(2a) * b – where p is the smallest prime factor of N – to save 1 more byte.

       li_mF0=~2/#:J_*/:N!_{;)__*N\-[{_mqi__*@\-}3*])}g+Jf*p

4. The previous modification allows us to slightly change the implementation (without touching the algorithm itself): instead of dividing N by p**(2a) and multiplying the solution by p**a, we can directly restrict the possible solutions to multiples of p**a. This saves 2 more bytes.

       li:NmF0=~2/#:J!_{;J+__*N\-[{_mqJ/iJ*__*@\-}3*])}g+

5. Not restricting the first integer to multiples of p**a saves an additional byte.

       li:NmF0=~2/#:J(_{;)__*N\-[{_mqJ/iJ*__*@\-}3*])}g+

### Final algorithm

1. Find a and b such that N = p**(2a) * b, where b is not a multiple of p**2 and p is the smallest prime factor of N.
2. Set A = p**a and j = A.
3. Set k = floor(sqrt(N - j**2) / A) * A.
4. Set l = floor(sqrt(N - j**2 - k**2) / A) * A.
5. Set m = floor(sqrt(N - j**2 - k**2 - l**2) / A) * A.
6. If N - j**2 - k**2 - l**2 - m**2 > 0, set j = j + 1 and go back to step 3.

This can be implemented as follows:

    li:N       " Read an integer from STDIN and save it in “N”.                     ";
    mF         " Push the factorization of “N”. Result: [ [ p1 a1 ] ... [ pn an ] ] ";
    0=~        " Push “p1” and “a1”. “p1” is the smallest prime divisor of “N”.     ";
    2/#:J      " Compute p1**(a1/2) and save the result in “J”.                     ";
    (_         " Undo the first two instructions of the loop.                       ";
    {          "                                                                    ";
    ;)_        " Pop and discard. Increment “J” and duplicate.                      ";
    _*N\-      " Compute N - J**2.                                                  ";
    [{         "                                                                    ";
    _mqJ/iJ*   " Compute K = floor(sqrt(N - J**2)/J)*J.                             ";
    __*@       " Duplicate, square and rotate. Result: K K**2 N - J**2              ";
    \-         " Swap and subtract. Result: K N - J**2 - K**2                       ";
    }3*]       " Do the above three times and collect in an array.                  ";
    )          " Pop the array. Result: N - J**2 - K**2 - L**2 - M**2               ";
    }g         " If the result is zero, break the loop.                             ";
    +p         " Unshift “J” in [ K L M ] and print a string representation.
";

### Benchmarks

I've run all 5 versions over all positive integers up to 999,999,999 on my Intel Core i7-3770, measured execution time and counted the iterations required to find a solution. The following table shows the average number of iterations and execution time for a single integer:

    Version               |    1    |    2    |    3    |    4    |    5
    ----------------------+---------+---------+---------+---------+---------
    Number of iterations  |  4.005  |  28.31  |  27.25  |  27.25  |  41.80
    Execution time [µs]   |  6.586  |  39.69  |  55.10  |  63.99  |  88.81

1. At only 4 iterations and 6.6 microseconds per integer, primo's algorithm is incredibly fast.

2. Starting at floor(sqrt(N)) makes more sense, since this leaves us with smaller values for the sum of the remaining three squares. As expected, starting at 1 is a lot slower.

3. This is a classical example of a badly implemented good idea. To actually reduce the code size, we rely on mF, which factorizes the integer N. Although version 3 requires fewer iterations than version 2, it is a lot slower in practice.

4. Although the algorithm does not change, version 4 is a lot slower. This is because it performs an additional floating-point division and an integer multiplication in each iteration.

5. For the input N = p**(2a) * b, algorithm 5 will require (k - 1) * p**a + 1 iterations, where k is the number of iterations algorithm 4 requires. If k = 1 or a = 0, this makes no difference. However, any input of the form 4**a * (4**c * (8 * d + 7) + 1) may perform quite badly: for the starting value j = p**a, N - 4**a = 4**(a + c) * (8 * d + 7), so it cannot be expressed as a sum of three squares. Thus, k > 1 and at least p**a iterations are required. Thankfully, primo's original algorithm is incredibly fast and N < 1,000,000,000. The worst case I could find by hand is 265,289,728 = 4**10 * (4**1 * (8 * 7 + 7) + 1), which requires 6,145 iterations. Execution time is less than 300 ms on my machine.
On average, this version is 13.5 times slower than the implementation of primo's algorithm. • "Instead of expressing N as 4**a * b, we can express it as p**(2a) * b." This is actually an improvement. I would have liked to have included this, but it was a lot longer (the ideal is to find the largest perfect square factor). "Starting with 1 and incrementing saves 4 bytes." This is definitely slower. The runtime for any given range is 4-5 times as long. "All positive integers up to 999,999,999 took 24.67 hours, giving an average execution time of 0.0888 milliseconds per integer." Perl only took 2.5 hours to crunch the whole range, and the Python translation is 10x faster ;) – primo May 25 '14 at 8:43 • @primo: Yes, you're right. Dividing by p**a is an improvement, but it's a small one. Dividing by the largest perfect square factor makes a big difference when starting from 1; it's still an improvement when starting from the integer part of the square root. Implementing it would cost only two more bytes. The abysmal execution time seems to be due to my unimprovements, not CJam. I'll rerun the tests for all algorithms (including the one you proposed), counting iterations rather than measuring wall time. Let's see how long that takes... – Dennis May 25 '14 at 16:33 • Finding the largest square factor only costs 2 additional bytes?! What kind of sorcery is this? – primo May 26 '14 at 4:49 • @primo: If the integer is on the stack, 1\ swaps it with 1 (accumulator), mF pushes its factorization and {~2/#*}/ raises every prime factor to its exponent divided by two, then multiplies it with the accumulator. For the direct implementation of your algorithm, that only adds 2 bytes. The small difference is mainly due to the awkward way I had to find the exponent of 4, since CJam doesn't (seem to) have a while loop... – Dennis May 26 '14 at 6:24 • Anyway, the benchmark finished. 
The total number of iterations required to factorize all 1,000,000 integers without finding the largest square factor is 4,004,829,417, with an execution time of 1.83 hours. Dividing by the largest square factor reduces the iteration count to 3,996,724,799, but it increases the time to 6.7 hours. Looks like factorizing takes a lot more time than finding the squares... – Dennis May 26 '14 at 6:25 # FRACTRAN: 156 98 fractions Since this is a classic number theory problem, what better way to solve this than to use numbers! 37789/221 905293/11063 1961/533 2279/481 57293/16211 2279/611 53/559 1961/403 53/299 13/53 1/13 6557/262727 6059/284321 67/4307 67/4661 6059/3599 59/83 1/59 14279/871933 131/9701 102037079/8633 14017/673819 7729/10057 128886839/8989 13493/757301 7729/11303 89/131 1/89 31133/2603 542249/19043 2483/22879 561731/20413 2483/23701 581213/20687 2483/24523 587707/21509 2483/24797 137/191 1/137 6215941/579 6730777/965 7232447/1351 7947497/2123 193/227 31373/193 23533/37327 5401639/458 229/233 21449/229 55973/24823 55973/25787 6705901/52961 7145447/55973 251/269 24119/251 72217/27913 283/73903 281/283 293/281 293/28997 293/271 9320827/58307 9831643/75301 293/313 28213/293 103459/32651 347/104807 347/88631 337/347 349/337 349/33919 349/317 12566447/68753 13307053/107143 349/367 33197/349 135199/38419 389/137497 389/119113 389/100729 383/389 397/383 397/39911 397/373 1203/140141 2005/142523 2807/123467 4411/104411 802/94883 397/401 193/397 1227/47477 2045/47959 2863/50851 4499/53743 241/409 1/241 1/239 Takes in input of the form 2n × 193 and outputs 3a × 5b × 7c × 11d. Might run in 3 minutes if you have a really good interpreter. Maybe. ...okay, not really. This seemed to be such a fun problem to do in FRACTRAN that I had to try it. 
Obviously, this isn't a proper solution to the question as it doesn't meet the time requirements (it brute forces) and it's barely even golfed, but I thought I'd post it here because it's not every day that a Codegolf question can be done in FRACTRAN ;)

## Hint

The code is equivalent to the following pseudo-Python:

    a, b, c, d = 0, 0, 0, 0

    def square(n):
        # Returns n**2

    def compare(a, b):
        # Returns (0, 0) if a==b, (1, 0) if a<b, (0, 1) if a>b

    def foursquare(a, b, c, d):
        # Returns square(a) + square(b) + square(c) + square(d)

    while compare(foursquare(a, b, c, d), n) != (0, 0):
        d += 1
        if compare(c, d) == (1, 0):
            c += 1
            d = 0
        if compare(b, c) == (1, 0):
            b += 1
            c = 0
            d = 0
        if compare(a, b) == (1, 0):
            a += 1
            b = 0
            c = 0
            d = 0

# Mathematica 61 66 51

Three methods are shown. Only the first approach meets the time requirement.

## 1 - FindInstance (51 chars)

This returns a single solution to the equation.

    FindInstance[a^2 + b^2 + c^2 + d^2 == #, {a, b, c, d}, Integers] &

Examples and timings:

    FindInstance[a^2 + b^2 + c^2 + d^2 == 123456789, {a, b, c, d}, Integers] // AbsoluteTiming
    {0.003584, {{a -> 2600, b -> 378, c -> 10468, d -> 2641}}}

    FindInstance[a^2 + b^2 + c^2 + d^2 == #, {a, b, c, d}, Integers] &[805306368]
    {0.004437, {{a -> 16384, b -> 16384, c -> 16384, d -> 0}}}

## 2 - IntegerPartitions

This works also, but is too slow to meet the speed requirement.

    f@n_ := Sqrt@IntegerPartitions[n, {4}, Range[0, Floor@Sqrt@n]^2, 1][[1]]

Range[0, Floor@Sqrt@n]^2 is the set of squares of the integers from 0 to Floor@Sqrt@n (the candidate squares for the partition). {4} requires the integer partitions of n to consist of 4 elements from the above-mentioned set of squares. 1, within the function IntegerPartitions, returns the first solution. [[1]] removes the outer braces; the solution was returned as a set of one element.

    f[123456]
    {348, 44, 20, 4}

## 3 - PowersRepresentations

PowersRepresentations returns all of the solutions to the 4 squares problem. It can also solve for sums of other powers.
PowersRepresentations returns, in under 5 seconds, the 181 ways to express 123456789 as the sum of 4 squares: n= 123456; PowersRepresentations[n, 4, 2] //AbsoluteTiming However, it is far too slow for other sums. • Wow, Mathematica does the brute force fast. Is IntegerPartitions doing something much more clever than trying every combination, like DFT convolution on the sets? The specs ask for the numbers, by the way, not their squares. – xnor May 21 '14 at 11:31 • I think Mathematica uses brute force, but probably has optimized IntegerPartitions. As you can see from the timings, the speed varies greatly depending upon whether the first (largest) number is close to the square root of n. Thanks for catching the spec violation in the earlier version. – DavidC May 21 '14 at 11:44 • Could you benchmark f[805306368]? Without dividing by powers of 4 first, my solution takes 0.05 s for 999999999; I've aborted the benchmark for 805306368 after 5 minutes... – Dennis May 21 '14 at 13:32 • f[805306368] returns {16384, 16384, 16384} after 21 minutes. I used {3} in place of {4}. The attempt to solve it with a sum of 4 non-zero squares was unsuccessful after several hours of running. – DavidC May 22 '14 at 18:41 • I don't have access to Mathematica, but from what I've read in the documentation center, IntegerPartitions[n,4,Range[Floor@Sqrt@n]^2 should work as well. However, I don't think you should use method 1 for your score, since it doesn't comply with the time limit specified in the question. – Dennis May 22 '14 at 22:25 ## Perl - 116 bytes 87 bytes (see update below) #!perl -p$.<<=1,$_>>=2until$_&3; {$n=$_;@a=map{$n-=$a*($a-=$_%($b=1|($a=0|sqrt$n)>>1));$_/=$b;$a*$.}($j++)x4;$n&&redo}$_="@a" Counting the shebang as one byte, newlines added for horizontal sanity. Something of a combination submission. The average (worst?) case complexity seems to be O(log n) O(n0.07). Nothing I've found runs slower than 0.001s, and I've checked the entire range from 900000000 - 999999999. 
If you find anything that takes significantly longer than that, ~0.1s or more, please let me know. Sample Usage $echo 123456789 | timeit perl four-squares.pl 11110 157 6 2 Elapsed Time: 0:00:00.000$ echo 1879048192 | timeit perl four-squares.pl 32768 16384 16384 16384 Elapsed Time: 0:00:00.000 $echo 999950883 | timeit perl four-squares.pl 31621 251 15 4 Elapsed Time: 0:00:00.000 The final two of these seem to be worst case scenerios for other submissions. In both instances, the solution shown is quite literally the very first thing checked. For 123456789, it's the second. If you want to test a range of values, you can use the following script: use Time::HiRes qw(time);$t0 = time(); # enter a range, or comma separated list here for (1..1000000) { $t1 = time();$initial = $_;$j = 0; $i = 1;$i<<=1,$_>>=2until$_&3; {$n=$_;@a=map{$n-=$a*($a-=$_%($b=1|($a=0|sqrt$n)>>1));$_/=$b;$a*$i}($j++)x4;$n&&redo} printf("%d: @a, %f\n",$initial, time()-$t1) } printf('total time: %f', time()-$t0); Best when piped to a file. The range 1..1000000 takes about 14s on my computer (71000 values per second), and the range 999000000..1000000000 takes about 20s (50000 values per second), consistent with O(log n) average complexity. ## Update Edit: It turns out that this algorithm is very similar to one that has been used by mental calculators for at least a century. Since originally posting, I have checked every value on the range from 1..1000000000. The 'worst case' behavior was exhibited by the value 699731569, which tested a grand total of 190 combinations before arriving at a solution. If you consider 190 to be a small constant - and I certainly do - the worst case behavior on the required range can be considered O(1). That is, as fast as looking up the solution from a giant table, and on average, quite possibly faster. Another thing though. After 190 iterations, anything larger than 144400 hasn't even made it beyond the first pass. 
The logic for the breadth-first traversal is worthless - it's not even used. The above code can be shortened quite a bit: #!perl -p $.*=2,$_/=4until$_&3; @a=map{$=-=$%*($%=$=**.5-$_);$%*$.}$j++,(0)x3while$=&&=$_;$_="@a" Which only performs the first pass of the search. We do need to confirm that there aren't any values below 144400 that needed the second pass, though: for (1..144400) { $initial =$_; # reset defaults $.=1;$j=undef;$==60;$.*=2,$_/=4until$_&3; @a=map{$=-=$%*($%=$=**.5-$_);$%*$.}$j++,(0)x3while$=&&=$_; # make sure the answer is correct $t=0;$t+=$_*$_ for @a; $t ==$initial or die("answer for $initial invalid: @a"); } In short, for the range 1..1000000000, a near-constant time solution exists, and you're looking at it. ## Updated Update @Dennis and I have made several improvements this algorithm. You can follow the progress in the comments below, and subsequent discussion, if that interests you. The average number of iterations for the required range has dropped from just over 4 down to 1.229, and the time needed to test all values for 1..1000000000 has been improved from 18m 54s, down to 2m 41s. The worst case previously required 190 iterations; the worst case now, 854382778, needs only 21. 
The final Python code is the following:

    from math import sqrt

    # the following two tables can, and should be pre-computed

    qqr_144 = set([
        0, 1, 2, 4, 5, 8, 9, 10, 13, 16, 17, 18, 20, 25, 26, 29, 32, 34,
        36, 37, 40, 41, 45, 49, 50, 52, 53, 56, 58, 61, 64, 65, 68, 72,
        73, 74, 77, 80, 81, 82, 85, 88, 89, 90, 97, 98, 100, 101, 104,
        106, 109, 112, 113, 116, 117, 121, 122, 125, 128, 130, 133, 136, 137])

    # 10kb, should fit entirely in L1 cache
    Db = []
    for r in range(72):
        S = bytearray(144)
        for n in range(144):
            c = r
            while True:
                v = n - c * c
                if v%144 in qqr_144: break
                if r - c >= 12: c = r; break
                c -= 1
            S[n] = r - c
        Db.append(S)

    qr_720 = set([
        0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 145, 160, 169,
        180, 196, 225, 241, 244, 256, 265, 289, 304, 324, 340, 361, 369,
        385, 400, 409, 436, 441, 481, 484, 496, 505, 529, 544, 576, 580,
        585, 601, 625, 640, 649, 676])

    # 253kb, just barely fits in L2 of most modern processors
    Dc = []
    for r in range(360):
        S = bytearray(720)
        for n in range(720):
            c = r
            while True:
                v = n - c * c
                if v%720 in qr_720: break
                if r - c >= 48: c = r; break
                c -= 1
            S[n] = r - c
        Dc.append(S)

    def four_squares(n):
        k = 1
        while not n&3:
            n >>= 2; k <<= 1
        odd = n&1
        n <<= odd
        a = int(sqrt(n)); n -= a * a
        while True:
            b = int(sqrt(n))
            b -= Db[b%72][n%144]
            v = n - b * b
            c = int(sqrt(v))
            c -= Dc[c%360][v%720]
            if c >= 0:
                v -= c * c
                d = int(sqrt(v))
                if v == d * d: break
            n += (a<<1) - 1; a -= 1
        if odd:
            if (a^b)&1:
                if (a^c)&1:
                    b, c, d = d, b, c
                else:
                    b, c = c, b
            a, b, c, d = (a+b)>>1, (a-b)>>1, (c+d)>>1, (c-d)>>1
        a *= k; b *= k; c *= k; d *= k
        return a, b, c, d

This uses two pre-computed correction tables, one 10kb in size, the other 253kb. The code above includes the generator functions for these tables, although these should probably be computed at compile time.
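The correction tables are derived from quadratic-residue sets (qqr_144, qr_720); the number-theoretic test underlying this whole family of answers, Legendre's three-square theorem, is tiny on its own. A reference sketch (this helper is not part of primo's code):

```python
def three_squares_possible(n):
    # Legendre: n is a sum of three squares iff n is NOT of the form
    # 4**j * (8*k + 7)
    while n and n % 4 == 0:
        n //= 4
    return n % 8 != 7
```

For example, 7 and 15 fail the test (they need four squares), while 6 = 4 + 1 + 1 passes.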
A version with more modestly sized correction tables can be found here: http://codepad.org/1ebJC2OV

This version requires an average of 1.620 iterations per term, with a worst case of 38, and the entire range runs in about 3m 21s. A little bit of time is made up for by using a bitwise and for the b correction, rather than a modulo.

### Improvements

Even values are more likely to produce a solution than odd values. The mental calculation article linked to previously notes that if, after removing all factors of four, the value to be decomposed is even, this value can be divided by two, and the solution reconstructed: if n/2 = a^2 + b^2 + c^2 + d^2, then n = (a+b)^2 + (a-b)^2 + (c+d)^2 + (c-d)^2.

Although this might make sense for mental calculation (smaller values tend to be easier to compute), it doesn't make much sense algorithmically. If you take 256 random 4-tuples and examine the sum of the squares modulo 8, you will find that the values 1, 3, 5, and 7 are each reached on average 32 times. However, the values 2 and 6 are each reached 48 times. Multiplying odd values by 2 will find a solution, on average, in 33% fewer iterations. The reconstruction is the following: if 2n = a^2 + b^2 + c^2 + d^2, then n = ((a+b)/2)^2 + ((a-b)/2)^2 + ((c+d)/2)^2 + ((c-d)/2)^2.

Care needs to be taken that a and b have the same parity, as well as c and d, but if a solution was found at all, a proper ordering is guaranteed to exist.

Impossible paths don't need to be checked. After selecting the second value, b, it may already be impossible for a solution to exist, given the possible quadratic residues for any given modulo. Instead of checking anyway, or moving on to the next iteration, the value of b can be 'corrected' by decrementing it by the smallest amount that could possibly lead to a solution. The two correction tables store these values, one for b, and the other for c. Using a higher modulo (more accurately, using a modulo with relatively fewer quadratic residues) will result in a better improvement. The value a doesn't need any correction; by modifying n to be even, all values of a are valid.

• This is incredible!
The final algorithm is probably the simplest of all the answers, yet 190 iterations are all it takes... – Dennis May 24 '14 at 5:38
• @Dennis I would be very surprised if it hasn't been mentioned elsewhere. It seems too simple to have been overlooked. – primo May 24 '14 at 15:12
• 1. I'm curious: Did any of the test values in your complexity analysis require the breadth-first traversal? 2. The Wikipedia article you linked to is a little confusing. It mentions the Rabin-Shallit algorithm, but provides an example for an entirely different one. 3. It would be interesting to see when exactly the Rabin-Shallit algorithm would outperform yours; I'd imagine the primality tests are rather expensive in practice. – Dennis May 26 '14 at 20:21
• 1. Not one. 2. This is where I got my information (i.e. that this algorithm exists); I haven't seen the analysis, or even read the paper. 3. The curve becomes so steep at around 1e60 that it really wouldn't matter how 'slow' the O(log²n) is; it will still cross at about that point. – primo May 27 '14 at 0:17
• The second link in the question explains how to implement Rabin-Shallit, but it doesn't talk about the complexity. This answer on MathOverflow gives a nice summary of the paper. By the way, you rediscovered an algorithm used by Gottfried Ruckle in 1911 (link). – Dennis May 27 '14 at 2:39

# Python 3 (177)

    N=int(input())
    k=1
    while N%4<1:N//=4;k*=2
    n=int(N**.5)
    R=range(int(2*n**.5)+1)
    print([(a*k,b*k,c*k,d*k)for d in R for c in R for b in R for a in[n,n-1]if a*a+b*b+c*c+d*d==N][0])

After we reduce the input N to be not divisible by 4, it must be expressible as a sum of four squares where one of them is either the largest possible value a=int(N**0.5) or one less than that, leaving only a small remainder for the sum of the three other squares to take care of. This greatly reduces the search space. Here's a proof that this code always finds a solution. We wish to find an a so that n-a^2 is the sum of three squares.
From Legendre's Three-Square Theorem, a number is the sum of three squares unless it is of the form 4^j(8*k+7). In particular, such numbers are either 0 or 3 (modulo 4). We show that no two consecutive values of a can make the leftover amount N-a^2 have such a shape for both values. We can do so by simply making a table of a and N modulo 4, noting that N%4!=0 because we've extracted all powers of 4 out of N.

              a%4
            0 1 2 3
           +--------
         1 | 1 0 1 0
    N%4  2 | 2 1 2 1   <- (N-a*a)%4
         3 | 3 2 3 2

Because no two consecutive a give (N-a*a)%4 in [0,3], one of them is safe to use. So, we greedily use the largest possible n with n^2<=N, and n-1. Since N<(n+1)^2, the remainder N-a^2 to be represented as a sum of three squares is at most (n+1)^2-(n-1)^2, which equals 4*n. So, it suffices to check only values up to 2*sqrt(n), which is exactly the range R.

One could further optimize running time by stopping after a single solution, computing rather than iterating for the last value d, and searching only among values b<=c<=d. But, even without these optimizations, the worst instance I could find finished in 45 seconds on my machine.

The chain of "for x in R" is unfortunate. It can probably be shortened by string substitution or replacement by iterating over a single index that encodes (a,b,c,d). Importing itertools turned out not worth it.

Edit: Changed to int(2*n**.5)+1 from 2*int(n**.5)+2 to make the argument cleaner, same character count.

• This doesn't work for me... 5 => (2, 1, 0, 0) – Harry Beadle May 18 '14 at 10:59
• Strange, it works for me: I get 5 => (2, 1, 0, 0) running on Ideone 3.2.3 or in Idle 3.2.2. What do you get? – xnor May 18 '14 at 19:58
• @xnor BritishColour gets 5 => (2, 1, 0, 0). Did you even read the comment? (Now we have 3 comments in a row that have that code snippet. Can we keep the streak going?) – Justin May 19 '14 at 5:56
• @Quincunx If we are to decipher 5 => (2, 1, 0, 0), it means 2^2 + 1^2 + 0^2 + 0^2 = 5. So, yes, we can.
– HostileFork says dont trust SE May 19 '14 at 6:03 • Quincunx, I read @BritishColour's comment, and as far as I can see, 5 => (2, 1, 0, 0) is correct. The examples in the question consider 0^2=0 to be a valid square number. Therefore I interpreted (as I think xnor did) that British Colour got something else. British colour, as you hav not responded again, can we assume that you do in fact get 2,1,0,0? – Level River St May 19 '14 at 10:41 # CJam, 919074 71 bytes q~{W):W;:N4md!}gmqi257:B_**_{;)_[Bmd\Bmd]_N\{_*-}/mq_i@+\1%}g{2W#*}%\; Compact, but slower than my other approach. Try it online! Paste the Code, type the desired integer in Input and click Run. ### Background This post started as a 99 byte GolfScript answer. While there was still room for improvement, GolfScript lacks built-in sqrt function. I kept the GolfScript version until revision 5, since it was very similar to the CJam version. However, the optimizations since revision 6 require operators that are not available in GolfScript, so instead of posting separate explanations for both languages, I decided to drop the less competitive (and much slower) version. The implemented algorithm computes the numbers by brute force: 1. For input m, find N and W such that m = 4**W * N. 2. Set i = 257**2 * floor(sqrt(N/4)). 3. Set i = i + 1. 4. Find integers j, k, l such that i = 257**2 * j + 257 * k + l, where k, l < 257. 5. Check if d = N - j**2 - k**2 - l**2 is a perfect square. 6. If it isn't, and go back to step 3. 7. Print 2**W * j, 2**W * k, 2**W * l, 2**W * sqrt(m). ### Examples $ TIME='\n%e s' time cjam lagrange.cjam <<< 999999999 [27385 103 15813 14] 0.46 s {;;(.^3$\-r;)8%!}do-1...{;;;)..252/@252%^@^@+4$\-v^@-}do 5$]{f*}%-4> Fast, but lengthy. The newline can be removed. Try it online. Note that the online interpreter has a 5 second time limit, so it might not work for all numbers. 
### Background The algorithm takes advantage of Legendre's three-square theorem, which states that every natural number n that is not of the form can be expressed as the sum of three squares. The algorithm does the following: 1. Express the number as 4**i * j. 2. Find the largest integer k such that k**2 <= j and j - k**2 satisfies the hypothesis of Legendre's three-square theorem. 3. Set i = 0. 4. Check if j - k**2 - (i / 252)**2 - (i % 252)**2 is a perfect square. 5. If it isn't, increment i and go back to step 4. ### Examples $ TIME='%e s' time golfscript legendre.gs <<< 0 [0 0 0 0] 0.02 s $TIME='%e s' time golfscript legendre.gs <<< 123456789 [32 0 38 11111] 0.02 s$ TIME='%e s' time golfscript legendre.gs <<< 999999999 [45 1 217 31622] 0.03 s $TIME='%e s' time golfscript legendre.gs <<< 805306368 [16384 0 16384 16384] 0.02 s Timings correspond to an Intel Core i7-4700MQ. ### How it works ~ # Interpret the input string. Result: “n” { # . # Duplicate the topmost stack item. [ # 4* # Multiply it by four. { # 4/ # Divide by four. .. # Duplicate twice. 4%1$ # Compute the modulus and duplicate the number. !|! # Push 1 if both are truthy. }do # Repeat if the number is divisible by four and non-zero. ] # Collect the pushed values (one per iteration) into an array. )\ # Pop the last element from the array and swap it with the array. }:r~ # Save this code block as “r” and execute it. ,(2\? # Get the length of the array, decrement it and exponentiate. :f; # Save the result in “f”. # The topmost item on the stack is now “j”, which is not divisible by # four and satisfies that n = f**2 * j. { # {..*}:^~ # Save a code block to square a number in “^” and execute it. 4-1?? # Raise the previous number to the power of 1/4. # The two previous lines compute (x**2)**(1/4), which is sqrt(abs(x)). n*, # Repeat the string "\n" that many times and compute its length. # This casts to integer. (GolfScript doesn't officially support Rationals.) 
}:v~ # Save the above code block in “v” and execute it. ).. # Undo the first three instructions of the loop. { # ;;( # Discard two items from the stack and decrement. .^3$\- # Square and subtract from “n”. r;)8%! # Check if the result satisfies the hypothesis of the three-square theorem. }do # If it doesn't, repeat the loop. -1... # Push 0 (“i”) and undo the first four instructions of the loop. { # ;;;) # Discard two items from the stack and increment “i”. ..252/@252% # Push the digits of “i” in base 252. ^@^@+4$\- # Square both, add and subtract the result v^@- # Take square root, square and compare. }do # If the difference is a perfect square, break the loop. 5\$] # Duplicate the difference an collect the entire stack into an array. {f*}% # Multiply very element of the array by “f”. -4> # Reduce the array to its four last elements (the four numbers). # Convert the result into a string. • I didn't understand j-k-(i/252)-(i%252). From your comments (I cant actually read the code), it looks like you mean j-k-(i/252)^2-(i%252)^2. BTW, the equivalent of j-k-(i/r)^2-(i%r)^2 where r=sqrt(k) may save a few characters (and seems to work without problems even for k=0 in my C program.) – Level River St May 22 '14 at 19:05 • @steveverrill: Yes, I made a mistake. Thank you for noticing. It should be j-k^2-(i/252)^2-(i%252)^2. I'm still waiting for the OP to clarify if 0 is a valid input or not. Your program gives 1414 -nan 6 4.000000 for input 0. – Dennis May 22 '14 at 19:18 • I'm talking about my new program using Legendre's theorem, which I haven't posted yet. It looks like it never calls the code with the % or / when I have the equivalent of k=0, which is why it's not causing problems. You'll see when I post it. Glad you got my old program running. Did you have the memory to build the full 2GB table in rev 1, and how long did it take? – Level River St May 22 '14 at 19:28 • Yeah, the C compiler can behave quite unexpectedly when optimizing. In GolfScript, 0/ => crash! 
:P I've run your rev 1 on my laptop (i7-4700MQ, 8 GiB RAM). On average, execution time is 18.5 seconds. – Dennis May 22 '14 at 19:33 • Wow is that 18.5 seconds including building the table? It takes over 2 minutes on my machine. I can see the problem is the Windows memory management. Rather than give the program the 2GB it needs straight away, it gives it in small chunks, so it must be doing a lot of unnecessary swapping until the full 2GB is allocated. Actually searching for the answer per user input is much faster, because by then the program doesn't have to go begging for memory. – Level River St May 22 '14 at 19:46 # Rev 1: C,190 a,z,m;short s[15<<26];p(){m=s[a=z-a];printf("%d %f ",m,sqrt(a-m*m));} main(){m=31727;for(a=m*m;--a;s[z<m*m?z:m*m]=a%m)z=a/m*(a/m)+a%m*(a%m);scanf("%d",&z);for(;a*!s[a]||!s[z-a];a++);p();p();} This is even more memory hungry than rev 0. Same principle: build a table with a positive value for all possible sums of 2 squares (and zero for those numbers which are not sums of two squares), then search it. In this rev use an array of short instead of char to store the hits, so I can store the root of one of the pair of squares in the table instead of just a flag. This simplifies function p (for decoding the sum of 2 squares) considerably as there is no need for a loop. Windows has a 2GB limit on arrays. I can get round that with short s[15<<26] which is an array of 1006632960 elements, enough to comply with the spec. Unfortunately, the total program runtime size is still over 2GB and (despite tweaking the OS settings) I have not been able to run over this size (though it is theoretically possible.) The best I can do is short s[14<<26] (939524096 elements.) m*m must be strictly less than this (30651^2=939483801.) Nevertheless, the program runs perfectly and should work on any OS that does not have this restriction. 
Ungolfed code a,z,m; short s[15<<26]; p(){m=s[a=z-a];printf("%d %f ",m,sqrt(a-m*m));} main(){ m=31727; for(a=m*m;--a;s[z<m*m?z:m*m]=a%m) //assignment to s[] moved inside for() is executed after the following statement. In this rev excessively large values are thrown away to s[m*m]. z=a/m*(a/m)+a%m*(a%m); //split a into high and low half, calculate h^2+l^2. scanf("%d",&z); for(;a*!s[a]||!s[z-a];a++); //loop until s[a] and s[z-a] both contain an entry. s[0] requires special handling as s[0]==0, therefore a* is included to break out of the loop when a=0 and s[z] contains the sum of 2 squares. p(); //print the squares for the first sum of 2 squares p();} //print the squares for the 2nd sum of 2 squares (every time p() is called it does a=z-a so the two sums are exchanged.) # Rev 0 C,219 a,z,i,m;double t;char s[1<<30];p(){for(i=t=.1;(m=t)-t;i++)t=sqrt(a-i*i);printf("%d %f ",i-1,t);} main(){m=1<<15;for(a=m*m;--a;){z=a/m*(a/m)+a%m*(a%m);s[z<m*m?z:0]=1;}scanf("%d",&z);for(;1-s[a]*s[z-a];a++);p();a=z-a;p();} This is a memory hungry beast. It takes a 1GB array, calculates all possible sums of 2 squares and stores a flag for each in the array. Then for the user input z, it searches the array for two sums of 2 squares a and z-a. the function p then reconsitutes the original squares that were used to make the sums of 2 squares a and z-a and prints them, the first of each pair as an integer, the second as a double (if it has to be all integers two more characters are needed,t > m=t.) The program takes a couple of minutes to build the table of sums of squares (I think this is due to memory management issues, I see the memory allocation going up slowly instead of jumping up as one might expect.) However once that is done it produces answers very quickly (if several numbers are to be calculated, the program from scanf onward can be put in a loop. 
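The same build-a-table-of-two-square-sums idea can be sketched in Python (a dictionary stands in for the giant flag array, so this sketch is only practical for much smaller inputs than the C version handles):

```python
from math import isqrt

def four_squares_via_pairs(n):
    # table maps every s = x*x + y*y (with x >= y, s <= n) to one (x, y) pair
    table = {}
    for x in range(isqrt(n) + 1):
        xx = x * x
        for y in range(x + 1):
            s = xx + y * y
            if s > n:
                break
            table.setdefault(s, (x, y))
    # find a split n = s + (n - s) with both halves in the table,
    # mirroring the golfed search loop over s[a] and s[z-a]
    for s, (a, b) in table.items():
        other = table.get(n - s)
        if other is not None:
            return a, b, other[0], other[1]
    return None
```

By Lagrange's theorem a valid split always exists, so the search never returns None for a non-negative input.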
Ungolfed code:

```c
a,z,i,m;
double t;
char s[1<<30];                            //handle numbers 0 up to 1073741823
p(){
  for(i=t=.1;(m=t)-t;i++)t=sqrt(a-i*i);   //where a contains the sum of 2 squares, search until the roots are found
  printf("%d %f ",i-1,t);}                //and print them; m=t is used to evaluate the integer part of t
main(){
  m=1<<15;                                //max root we need is sqrt(1<<30)
  for(a=m*m;--a;)                         //loop m*m-1 down to 1, leave 0 in a
    {z=a/m*(a/m)+a%m*(a%m);s[z<m*m?z:0]=1;} //split a into high and low half, calculate h^2+l^2; if under m*m, store a flag, otherwise throw the flag away to s[0]
  scanf("%d",&z);
  for(;1-s[a]*s[z-a];a++);                //starting at a=0 (see above), loop until flags are found for the sums of 2 squares of both (a) and (z-a)
  p();                                    //reconstitute and print the squares composing (a)
  a=z-a;                                  //assign (z-a) to a in order to...
  p();}                                   //reconstitute and print the squares composing (z-a)
```

Example output. The first case is per the question. The second was picked as a difficult one to search for: here the program has to search as far as 8192^2+8192^2=134217728, but it only takes a few seconds once the table is built.

```
123456789
0 2.000000 3328 10601.000000
805306368
8192 8192.000000 8192 24576.000000
```

• Shouldn't you add a prototype for sqrt? – edc65 May 21 '14 at 16:14

• @edc65 I'm using the GCC compiler (which is for Linux, but I have the Cygwin Linux environment installed on my Windows machine). This means I don't need to put #include <stdio.h> (for scanf/printf) or #include <math.h> (for sqrt); the compiler links the necessary libraries automatically. I have to thank Dennis for that (he told me on this question: codegolf.stackexchange.com/a/26330/15599). Best golfing tip I ever had. – Level River St May 21 '14 at 16:19

• I was already wondering why Hunt the Wumpus appeared in the linked questions. :) By the way, I don't know what GCC uses on Windows, but the GNU linker does not link the math library automatically, with or without the include.
To compile on Linux, you need the flag -lm. – Dennis May 22 '14 at 4:53

• @Dennis that's interesting: it does include stdio and several other libraries, but not math, even with the include? By which I understand that if you put the compiler flag, you don't need the include anyway? Well, it's working for me, so I'm not complaining; thanks again for the tip. BTW I'm hoping to post a completely different answer taking advantage of Legendre's theorem (but it will still use a sqrt). – Level River St May 22 '14 at 8:00

• -lm affects the linker, not the compiler. gcc opts to not require the prototypes for functions it "knows", so it works with or without the includes. However, the header files provide only function prototypes, not the functions themselves. On Linux (but not Windows, apparently), the math library libm is not part of the standard libraries, so you have to instruct ld to link to it. – Dennis May 22 '14 at 14:35

# Mathematica, 138 chars

So it turns out that this produces negative and imaginary results for certain inputs, as pointed out by edc65 (e.g., 805306368), so this isn't a valid solution. I'll leave it up for now, and maybe, if I really hate my time, I'll go back and try to fix it.

```mathematica
S[n_]:=Module[{a,b,c,d},G=Floor@Sqrt@#&;a=G@n;b:=G[n-a^2];c:=G[n-a^2-b^2];d:=G[n-a^2-b^2-c^2];While[Total[{a,b,c,d}^2]!=n,a-=1];{a,b,c,d}]
```

Or, unsquished:

```mathematica
S[n_] := Module[{a, b, c, d},
  G = Floor@Sqrt@# &;
  a = G@n;
  b := G[n - a^2];
  c := G[n - a^2 - b^2];
  d := G[n - a^2 - b^2 - c^2];
  While[Total[{a, b, c, d}^2] != n, a -= 1];
  {a, b, c, d}
]
```

I didn't look too hard at the other algorithms, but I expect this is the same idea. I just came up with the obvious solution and tweaked it until it worked. I tested it for all numbers between 1 and one billion and... it works. The test only takes about 100 seconds on my machine. The nice bit about this is that, since b, c, and d are defined with delayed assignments (:=), they don't have to be redefined when a is decremented.
This saved a few extra lines I had before. I might golf it further and nest the redundant parts, but here's the first draft.

Oh, and you run it as S@123456789, and you can test it with {S@#, Total[(S@#)^2]} & @ 123456789 or # == Total[(S@#)^2]&[123456789]. The exhaustive test is

```mathematica
n=0;
AbsoluteTiming@ParallelDo[If[e != Total[(S@e)^2], n=e; Abort[]] &, {e, 1, 1000000000}]
n
```

I used a Print[] statement before, but that slowed it down a lot, even though it never gets called. Go figure.

• This is really clean! I'm surprised that it suffices to simply take every value but the first as large as possible. For golfing, it's probably shorter to save n - a^2 - b^2 - c^2 as a variable and check that d^2 equals it. – xnor May 20 '14 at 23:00

• Does it really work? What solution does it find for input 805306368? – edc65 May 21 '14 at 7:08

• S[805306368] = {-28383, 536 I, 32 I, I}. Huh. That does produce 805306368 when you sum it, but obviously there is a problem with this algorithm. I guess I'll have to retract this for now; thanks for pointing that out... – krs013 May 21 '14 at 7:14

• The numbers that fail all seem to be divisible by large powers of 2. Specifically, they seem to be of the form a * 4^(2^k) for k>=2, having extracted out all powers of 4 so that a isn't a multiple of 4 (but could be even). Moreover, each a is either 3 mod 4, or twice such a number. The smallest one is 192. – xnor May 21 '14 at 8:09

# Haskell

```haskell
main=getLine>>=print.f.read
```

Simple brute force over pre-calculated squares. It needs the -O compilation option (I added 3 chars for this). It takes less than 1 minute for the worst case, 999950883. Only tested on GHC.

# C: 198 characters

I can probably squeeze it down to just over 100 characters. What I like about this solution is the minimal amount of junk: just a plain for-loop, doing what a for-loop should do (which is to be crazy).
```c
i,a,b,c,d;main(n){for(scanf("%d",&n);a*a+b*b-n?a|!b?a*a>n|a<b?(--a,b=1):b?++b:++a:(a=b=0,--n,++i):c*c+d*d-i?c|!d?c*c>i|c<d?(--c,d=1):d?++d:++c:(a=b=c=d=0,--n,++i):0;);printf("%d %d %d %d",a,b,c,d);}
```

And heavily prettified:

```c
#include <stdio.h>

int n, i, a, b, c, d;

int main()
{
    for (
        scanf("%d", &n);

        a*a + b*b - n
            ? a | !b
                ? a*a > n | a < b
                    ? (--a, b = 1)
                    : b ? ++b : ++a
                : (a = b = 0, --n, ++i)
            : c*c + d*d - i
                ? c | !d
                    ? c*c > i | c < d
                        ? (--c, d = 1)
                        : d ? ++d : ++c
                    : (a = b = c = d = 0, --n, ++i)
                : 0;
    );

    printf("%d %d %d %d\n", a, b, c, d);
    return 0;
}
```

Edit: It's not fast enough for all input, but I will be back with another solution. I'll let this ternary-operation mess stay as it is for now.

# Rev B: C, 179

```c
a,b,c,d,m=1,n,q,r;main(){for(scanf("%d",&n);n%4<1;n/=4)m*=2;
for(a=sqrt(n),a-=(3+n-a*a)%4/2;r=n-a*a-b*b-c*c,d=sqrt(r),d*d-r;c=q%256)b=++q>>8;
printf("%d %d %d %d",a*m,b*m,c*m,d*m);}
```

Thanks to @Dennis for the improvements. The rest of the answer below is not updated from rev A.

# Rev A: C, 195

```c
a,b,c,d,n,m,q;double r=.1;main(){scanf("%d",&n);for(m=1;!(n%4);n/=4)m*=2;a=sqrt(n);a-=(3+n-a*a)%4/2;
for(;(d=r)-r;q++){b=q>>8;c=q%256;r=sqrt(n-a*a-b*b-c*c);}printf("%d %d %d %d ",a*m,b*m,c*m,d*m);}
```

Much faster than my other answer, and with much less memory! This uses Legendre's three-square theorem (http://en.wikipedia.org/wiki/Legendre%27s_three-square_theorem). Any number not of the following form (I call this the prohibited form) can be expressed as the sum of 3 squares:

```
4^a*(8b+7), or equivalently 4^a*(8b-1)
```

Note that all odd square numbers are of the form 8b+1 and all even square numbers are superficially of the form 4b. However, this hides the fact that all even square numbers are of the form 4^a*(odd square) == 4^a*(8b+1). As a result, 2^x-(any square number < 2^(x-1)) for odd x will always be of the prohibited form. Hence these numbers and their multiples are difficult cases, which is why so many of the programs here divide out powers of 4 as a first step.
As stated in @xnor's answer, N-a*a cannot be of the prohibited form for 2 consecutive values of a. Below I present a simplified form of his table. In addition to the fact that, after division by 4, N%4 cannot equal 0, note that there are only 2 possible values for (a*a)%4:

```
        (a*a)%4 = 0  1
            +---------
    N%4 = 1 |   1  0
    N%4 = 2 |   2  1     <- (N-a*a)%4
    N%4 = 3 |   3  2
```

So, we want to avoid values of (N-a*a) that may be of the prohibited form, namely those where (N-a*a)%4 is 3 or 0. As can be seen, this cannot occur for the same N with both odd and even (a*a).

So, my algorithm works like this:

1. Divide out powers of 4.
2. Set a=int(sqrt(N)), the largest possible square.
3. If (N-a*a)%4 = 0 or 3, decrement a (only once).
4. Search for b and c such that N-a*a-b*b-c*c is a perfect square.

I particularly like the way I do step 3. I add 3 to N, so that the decrement is required if (3+N-a*a)%4 = 3 or 2 (but not 1 or 0). Divide this by 2 and the whole job can be done by a fairly simple expression.

Ungolfed code. Note the single for loop over q and the use of division/modulo to derive the values of b and c from it. I tried using a as a divisor instead of 256 to save bytes, but sometimes the value of a was not right and the program hung, possibly indefinitely. 256 was the best compromise, as I can use >>8 instead of /256 for the division.

```c
a,b,c,d,n,m,q;double r=.1;
main(){
  scanf("%d",&n);
  for(m=1;!(n%4);n/=4)m*=2;
  a=sqrt(n);
  a-=(3+n-a*a)%4/2;
  for(;(d=r)-r;q++){b=q>>8;c=q%256;r=sqrt(n-a*a-b*b-c*c);}
  printf("%d %d %d %d ",a*m,b*m,c*m,d*m);}
```

Output. An interesting quirk is that if you input a square number, N-(a*a)=0. But the program detects that 0%4=0 and decrements to the next square down. As a result, square-number inputs are always decomposed into a group of smaller squares unless they are of the form 4^x.

```
999999999 31621 1 161 294
805306368 16384 0 16384 16384
999950883 31621 1 120 221
1 0 0 0 1
2 1 0 0 1
5 2 0 0 1
9 2 0 1 2
25 4 0 0 3
36 4 0 2 4
49 6 0 2 3
81 8 0 1 4
121 10 1 2 4
```

• Amazing! 0.003 s for every input!
You can get those 5 chars back:

1. Declare m=1 before main.
2. Execute scanf in the for statement.
3. Use float instead of double.
4. n%4<1 is shorter than !(n%4).
5. There's an obsolete space in printf's format string.

– Dennis May 22 '14 at 23:19

• A few more suggestions. – Dennis May 23 '14 at 3:26

• Thanks for the tips! n-=a*a doesn't work, because a can be modified afterwards (it gives some wrong answers and hangs on a small number of cases, like 100+7=107). I included all the rest. It would be nice to have something to shorten the printf, but I think the only answer is to change the language. The key to speed is to settle on a good value for a quickly. Written in C and with a search space of less than 256^2, this is probably the fastest program here. – Level River St May 23 '14 at 19:31

• Right, sorry. Shortening the printf statement seems difficult without using a macro or an array, which would add bulk elsewhere. Changing languages seems the "easy" way. Your approach would weigh 82 bytes in CJam. – Dennis May 23 '14 at 21:26

# JavaScript - 175 191 176 173 chars

Brute force, but fast.

Edit: Fast, but not enough for some nasty input. I had to add a first step of reduction by multiples of 4.

Edit 2: Get rid of the function, output inside the loop, then force the exit condition.

Edit 3: 0 is not a valid input.

```javascript
v=(p=prompt)();for(m=1;!(v%4);m+=m)v/=4;for(a=-~(q=Math.sqrt)(v);a--;)for(w=v-a*a,b=-~q(w);b--;)for(x=w-b*b,c=-~q(x);c--;)(d=q(x-c*c))==~~d&&p([m*a, m*b, m*c, m*d],a=b=c='')
```

Ungolfed:

```javascript
v = prompt();
for (m = 1; ! (v % 4); m += m)
{
  v /= 4;
}
for (a = - ~Math.sqrt(v); a--;) /* ~ forces to negative integer; changing sign leads to original value + 1 */
{
  for ( w = v - a*a, b = - ~Math.sqrt(w); b--;)
  {
    for ( x = w - b*b, c = - ~Math.sqrt(x); c--;)
    {
      (d = Math.sqrt(x-c*c)) == ~~d
      && prompt([m*a, m*b, m*c, m*d], a=b=c='') /* 0s a,b,c to exit the loops */
    }
  }
}
```

Example output:

```
123456789
11111,48,10,8

805306368
16384,16384,16384,0
```
https://zbmath.org/?q=an:0844.60030
# zbMATH — the first resource for mathematics

Distributions of Itô processes: Estimates for the density and for conditional expectations of integral functionals. (English. Russian original) Zbl 0844.60030
Theory Probab. Appl. 39, No. 4, 662-670 (1994); translation from Teor. Veroyatn. Primen. 39, No. 4, 825-833 (1994).

Consider the finite-dimensional equation $dy(t,\omega) = f(y(t,\omega), t,\omega)\,dt + \beta(y(t,\omega), t,\omega)\,dw(t)$, and let $y^{a,s}(t)$ be the solution with initial condition $y(s) = a$ which is independent of $w(t) - w(s)$, $t \geq s$. The article is devoted to the investigation of the functionals
$$V(x,s,\omega) = E \Biggl\{ \int^t_s \varphi(y^{x,s}(t,\omega), t, \omega)\, dt \Big/ {\mathcal F}_s \Biggr\},$$
where ${\mathcal F}_s = \sigma\{w(\tau) : \tau \leq s\}$. Estimates for various functional norms of $V$ are obtained. The main tool is a stochastic parabolic equation for the conditional density of $y^{a,s}(t)$ with respect to ${\mathcal F}_t$.

##### MSC:

60H10 Stochastic ordinary differential equations (aspects of stochastic analysis)
60H15 Stochastic partial differential equations (aspects of stochastic analysis)
https://zenodo.org/record/4658929/export/dcite4
Other, Open Access

# ATLAS Deliverable 4.5: Integrated management considering connectivity patterns

Arnaud-Haond, S; Fox, A; Cunha, M; Carlsson, J; Roterman, C

### DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns="http://datacite.org/schema/kernel-4">
<identifier identifierType="DOI">10.5281/zenodo.4658929</identifier>
<creators>
<creator> <creatorName>Arnaud-Haond, S</creatorName> <givenName>S</givenName> <familyName>Arnaud-Haond</familyName> </creator>
<creator> <creatorName>Fox, A</creatorName> <givenName>A</givenName> <familyName>Fox</familyName> </creator>
<creator> <creatorName>Cunha, M</creatorName> <givenName>M</givenName> <familyName>Cunha</familyName> </creator>
<creator> <creatorName>Carlsson, J</creatorName> <givenName>J</givenName> <familyName>Carlsson</familyName> </creator>
<creator> <creatorName>Roterman, C</creatorName> <givenName>C</givenName> <familyName>Roterman</familyName> </creator>
</creators>
<titles> <title>ATLAS Deliverable 4.5: Integrated management considering connectivity patterns</title> </titles>
<publisher>Zenodo</publisher>
<publicationYear>2021</publicationYear>
<dates> <date dateType="Issued">2021-04-01</date> </dates>
<resourceType resourceTypeGeneral="Other"/>
<alternateIdentifiers> <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/4658929</alternateIdentifier> </alternateIdentifiers>
<relatedIdentifiers>
<relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.4658928</relatedIdentifier>
<relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/atlas</relatedIdentifier>
</relatedIdentifiers>
<rightsList> <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights> </rightsList>
<descriptions>
<description descriptionType="Abstract">&lt;p&gt;Connectivity was assessed during ATLAS for a diversity of organisms, from the corals that structure Vulnerable Marine Ecosystems (VMEs) to economically important fishery species using two main pathways.
Predicted connectivity patterns were obtained through simulated larval Lagrangian particle modelling, based on oceanographic data gained in WP1 and reproductive knowledge produced in WP4. Realised connectivity was inferred using population genetics on sets of samples gathered before and during ATLAS, focusing on a subset of the target species initially listed, for which enough samples could be gathered to perform comprehensive population genetics analysis.&lt;br&gt; Lagrangian modelling of larval dispersal within ATLAS unravelled the effect of long-term ocean variability (Atlantic Meridional Overturning Circulation - AMOC, subpolar gyre strength - SPG and North Atlantic Oscillation - NAO) and larval behaviour on particle transport pathways and population connectivity (Fox et al., 2016), the contribution of man-made structures to connectivity (Henry et al., 2018) and the application of these results to marine planning and the development of ecologically coherent marine protected area networks. This work has underlined the crucial need for data on reproductive and larval biology to inform these predictions (Fox et al., 2016). This proved to be even more important for deep-sea species due to the vast extent of the water column through which larvae can disperse. Very different outcomes can be expected depending not only on the timing of reproduction or the length of pelagic larval duration (PLD), but also on the behaviour of larvae remaining on the seafloor or migrating more or less along the water column. The relationship between PLD and &amp;ldquo;realised connectivity&amp;rdquo; as estimated through population genetics is far from easily predictable, despite some relationship existing (Riginos et al., 2011). 
This is likely to be worse in the deep sea as exemplified by recent models where extensive PLD resulted in extreme variance of predicted connectivity (Ross et al., 2019), possibly due to the importance of the third dimension (depth) in the space potentially explored by larvae. Nevertheless, the new method developed in ATLAS (Fox et al., 2019) allows a generic approach to optimise multi objectives in the design of MPAs. This showed that for highly dispersive behaviours, all the Northern Atlantic could in theory be connected with a favoured anti-clockwise dispersal along the slopes. Results also underlined that seamount populations may act as crucial stepping stones (hubs) in the broad scale connectivity, placing them in the priority list to maintain connectivity for a broad range of species. This important role of seamounts and offshore banks was also demonstrated through Lagrangian modelling based on the reef coral Lophelia pertusa&amp;rsquo;s reproductive and larval biology (Fox et al., 2016).&amp;nbsp;As for inferences of &amp;ldquo;realised&amp;rdquo; connectivity, population genetics and genomics allow identification of distinct management units (MUs; Palsb&amp;oslash;ll et al., 2007), i.e. populations of conspecific individuals among which the degree of connectivity is sufficiently low so that each population should be monitored and managed separately, for example along the Northeast Atlantic coasts and the Mediterranean where the majority of samples analysed within ATLAS framework could be gathered. These samples laid also the foundations for a basin-scale analysis in the coming years in collaboration with partners from the northwest Atlantic under the leadership of the EU-funded project iAtlantic (see below). 
Importantly, genetically differentiated populations are not only demographically independent but may also shelter singular genetic diversity, one of the three components of biodiversity in need for conservation but too long neglected by management and conservation plans (Laikre et al., 2010). This was true for VMEs species such as Madrepora oculata, but also the commensal polychaete Eunice norvegica where at least one cryptic species was identified in the Atlantic. As for Lophelia pertusa, homogeneity was found in the Bay of Biscay despite some hints of differentiation of SE Rockall bank (Boavida et al., 2019b). The occurrence of those distinct MUs, or even distinct evolutionary significant units (ESUs; Ryder, 1986) in the case of Eunice sp., is essential for conservation, for each of them should be treated as distinct diversity entities, with no demographic (Brown Kodric-Brown, 1977) interdependence. This also means in case one MU would collapse, no evolutionary (Orr Unckless, 2014; Tomasini Peischl) rescue effect can be expected from the others, which needs to be accounted for in monitoring and management plans. Fish species studied in ATLAS were chosen among the target listed at the origin of the project for both their economic interest and, likewise invertebrates, the availability of samples to allow assessing connectivity over broad scales with a sufficient number of samples. Distinct MUs were also detected in the boarfish Capros aper, the horse mackerel Trachurus trachurus, and the Norway lobster Nephrops norvegicus. These MUs are demographically independent populations, thus multiple stocks expected to respond independently to harvesting and management. 
While the MUs in the boarfish largely agreed with the areas defined by the International Council for the Exploration of the Sea (ICES) (one exception though being noticed in the southern border), uncertainties remain for the horse mackerel and clear mismatches were revealed between MUs defined with genetic data and management areas for the Norway lobster, calling for a revision of management plans.&lt;br&gt; In this report, we also develop detailed explanations of the difference between genetic and demographic independency that are essential to understand the power and limitation of population genomics, but also to account for connectivity data in management plans. We believe those explanations are essential to share with managers and stakeholders, as well as scientific colleagues&amp;nbsp;expert in fields other than population genetics who are interested in applying population genetics to management and conservation.&lt;br&gt; On the basis of the results obtained in ATLAS, guidelines could be provided for future management plans, whether through the identification of mismatch between fisheries management units and the genetic differentiation of stocks, or the identification of genetically specific and disconnected populations for benthic organisms characterising VMEs. In fact, nearly every species showed a singular spatial delineation of MUs, resulting in a mosaic of patterns illustrating the challenge of multispecies purpose MPAs. One result is to account for the most limited connectivity potential in management plans, to ensure the maintenance of exchanges. In fact accounting for very limited dispersal to include connectivity in spatial planning showed the need to design large areas and to favour contiguous prioritisation units for conservation (Combes et al., in prep.).&lt;br&gt; Remaining uncertainties in areas where no genetic differentiation was detected is also important to consider and is different among taxa. 
Compared to those species for which clear MUs (or even ESUs) could be recognised, there were species and areas where no genetic differentiation could be detected (such as Lophelia pertusa in the Bay of Biscay), or no signature of bottleneck could be encountered (as was the case for most populations studied in ATLAS), despite extensive referenced exploitation or habitat destruction. In such cases it is very difficult to disentangle the real absence of barrier to gene flow and/or bottleneck from the insufficient power of the molecular method used. As demonstrated recently through simulations (Bailleul et al., 2018), there is a time lag between the moment barriers to connectivity or bottleneck occur and their signature can be detected through population genetics. This was designed as the &amp;ldquo;grey zone effect&amp;rdquo; and its duration depends on the statistical power delivered by the set of genetic markers used, but can encompass several tens to a thousand years. New generation high density genome scan analysis can help increasing the statistical power to detect such events. However, these methods are very demanding in terms of DNA quality and not all collections examined in ATLAS, particularly the older ones, gave such high quality DNA. Much work was thus dedicated during ATLAS to resolving DNA extraction protocols so that important existing deep-sea sample collections could be used. First results obtained on the two reef framework-forming corals and their associated commensal polychaete (Eunice spp., for we now know it encompasses at least two species), as well as the coral Dendrophyllia cornigera. For the last two species some samples liberated high quality DNA to build libraries that are being produced, and will allow to inferring our ability to detect hitherto ignored disruption of connectivity or bottlenecks. 
These data will be completed, analysed and interpreted beyond ATLAS, in the framework of iAtlantic using lessons learnt from genomic issues met and circumvented during ATLAS.&amp;nbsp;Due to issues related to DNA quality, RADSeq analysis on a dozen species for which just a handful of specimens met DNA quality standards allows the provision of genomic resources to be used with protocols requiring a lower DNA quality standard. These new resources will allow optimisation of the use of old but precious specimens and DNA collections of deep-sea organisms. Along with the basin scale analysis forecast for the two main reef framework-forming corals taxa in collaboration with US partners, those are important perspectives of development beyond ATLAS, that are planned to emerge during the iAtlantic project.&amp;nbsp;&lt;/p&gt;</description> </descriptions> <fundingReferences> <fundingReference> <funderName>European Commission</funderName> <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier> <awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/678760/">678760</awardNumber> <awardTitle>A Trans-AtLantic Assessment and deep-water ecosystem-based Spatial management plan for Europe</awardTitle> </fundingReference> </fundingReferences> </resource>
http://montvillechamber.org/mytek-brooklyn-prgjb/0ef36b-ssc-trigonometry-questions-with-solutions-pdf
The first one is a person in charge of a school or...Credit Crunch A credit crunch, credit crisis, or credit squeeze occurs when the general availability of credit declines considerably. JOIN CLASSROOM PROGRAM. Some more peoblems. You can also see. We advised you that mark those frequently repeated questions and also concentrate more on these questions. Here is the top 20 SSC CGL Trigonometry questiopns with solutions. 124) Solution: i. Contents1 Andhra Pradesh SSC Class 10 Solutions For Maths – Trigonometry (English Medium) 1.1 Exercise 11.1:1.2 Exercise 11.2:1.3 Exercise 11.3:1.4 Exercise 11.4: Andhra Pradesh SSC Class 10 Solutions For Maths – Trigonometry (English Medium) These Solutions are part of AP SSC Class 10 Solutions for Maths. CRACK SSC CGL 2017. SSC CGL Previous Year Questions Paper Yearwise (2018 to 2010) In the below section on this page, we have provided the SSC CGL Previous Year Questions Papers PDF Links. Exercise 11.4. In this post, we are providing you SSC JE Previous Year Question Papers PDF for the upcoming Junior Engineer exam.Previous Year Plays a Vital Role in every examination. If you are preparing for SSC JE Exam 2020 so this blog post will very beneficial for you. NCERT Solutions For Class 11 Maths Chapter 3 Trigonometric Functions are available at BYJU’S, which are prepared by our expert teachers. Typing Speed Tests. Junior Engineer (JE) Stenographer. Hi Friends, Here exam Tyaari brings you Practice Exercise of math (Textbook pg, no. Read Online Trigonometric Identities Questions And Solutions using trigonometric identities to simplify expressions. SSC Topic Tests. On this page you can read or download grade 12 trigonometry questions and answers in PDF format. Download Free Trigonometry Questions And Solutions with best solution regarding Trigonometry. 
In this article, we have provided Andhra Pradesh SSC Class 10 […] Practice Trigonometry Questions and Answers for Competitive Exams PDF for SSC Exams like SSC CGL Tier 1, CGL Tier 2, SSC CHSL, CDSE, MTS BYJU’S provides step by step solutions by considering the different understanding levels of students and the marking scheme. Find x and H in the right triangle below. Optional exercise . Class X maths trigonometry solutions. BUY ONLINE TEST SERIES. Click here to Download Aptitude Shortcuts and Examples for Trigonometry Problems Fill in the blanks with reference to the figure given below. Maybe you have knowledge that, people have look numerous times for their favorite readings like this trigonometry questions and solutions, but end up in malicious downloads. Problems. Where To Download Trigonometry Questions And Solutions Trigonometry Questions And Solutions Thank you for downloading trigonometry questions and solutions. $$\frac{\sin \theta}{\cos \theta}$$ = [tan θ] Read PDF Trigonometry Questions And Solutions Trigonometric Problems (solutions, examples, games, videos) Trigonometry Plays a vital role in Advance Maths & Quantitative Aptitude Section. ” SSC CGL 2017 Maths Solved Paper“ Algebra Plays a vital role in Advance Maths and Quantitative Aptitude Section. In every exam you will get atleast 3-4 questions from this topic. 611.69 KB 2672 Downloads Get Trigonometry Questions for RRB NTPC/Group D. Check Trigonometry exercises pdf for rrb group d exams, Trigonometry practice pdf for rrb NTPC exams. This angle is called the angle of elevation of the object.If the object is below the horizontal from the eye, then we have no turn our head downwards no view the object. Pdf format is the top 20 SSC CGL 2016 from Career Power Classroom Programs 2400+ candidates were selected SSC. Go to the figure given below and questions ssc trigonometry questions with solutions pdf best solution regarding Algebra difficult for the aspirants to the... 
2021-04-14 17:53:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32282787561416626, "perplexity": 3941.029287450202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077843.17/warc/CC-MAIN-20210414155517-20210414185517-00563.warc.gz"}
https://cs.stackexchange.com/questions/120982/what-are-some-uses-of-the-thue-morse-sequence-in-computer-science
# What are some uses of the Thue-Morse sequence in computer science?

Note: I come from a mathematics background.

The Thue-Morse sequence $$t_n$$ is a binary sequence that takes the value $$0$$ at the non-negative integer $$n$$ if the number of $$1$$s in its binary expansion is even, and $$1$$ otherwise. A definition that is closer to computer science states that $$t_n$$ is the binary sequence obtained by starting with $$0$$ and successively appending the boolean complement of the sequence obtained so far. Thus, $$t_n$$ begins $$0,1,1,0,1,0,0,1,\ldots$$

This sequence is of much interest in mathematics, but so far I have not come across any applications in computer science. This surprises me, for the two following reasons:

• The Thue-Morse sequence is automatic, i.e., the sequence is fully characterized by a finite automaton,
• It is a binary sequence.

What theoretical or practical applications of the Thue-Morse sequence are there in computer science?

The problem of computing $$t_N$$ given $$N$$ is famous in theoretical computer science as the PARITY problem and is a well-known example of a decision language that can be recognized by linear-size $$O(\log n)$$-depth circuits but cannot be recognized by polynomial-size circuits of constant depth; here $$n$$ is the length of the binary expansion of $$N$$.

Parity computation finds application in error-correcting codes, to calculate a check bit to append to data. It is also useful in multiplying 0/1 matrices in the binary field GF(2) (where the add operation is $$\oplus$$ and the multiply operation is $$\wedge$$), since the inner product of, for example, $$(a_1, a_2, a_3, a_4)$$ with $$(b_1, b_2, b_3, b_4)$$ is $$(a_1 \wedge b_1) \oplus (a_2 \wedge b_2) \oplus (a_3 \wedge b_3) \oplus (a_4 \wedge b_4),$$ which is just the parity of the bit-by-bit conjunction $$a \wedge b$$.
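Both definitions in the question fit in a few lines of code. The sketch below (an illustration of mine, not part of the original thread) computes $$t_n$$ as the parity of the number of 1 bits in the binary expansion of $$n$$:

```python
def thue_morse(n):
    """t_n = parity of the number of 1 bits in the binary expansion of n."""
    return bin(n).count("1") % 2

# The sequence begins 0, 1, 1, 0, 1, 0, 0, 1, ...
print([thue_morse(n) for n in range(8)])  # → [0, 1, 1, 0, 1, 0, 0, 1]
```

The same function also satisfies the recurrences $$t_{2n} = t_n$$ and $$t_{2n+1} = 1 - t_n$$, which connect the bit-counting definition to the append-the-complement one.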
I don't know if this counts as an application, but at least it shows up. When using a polynomial rolling hash, it's tempting to compute it modulo $$2^{32}$$ or $$2^{64}$$ (depending on the word size of the computer), since on most modern architectures addition and multiplication of integers just handle overflows this way, saving time. This method will fail often on Thue-Morse-like strings (like ABBABAAB...), as explained here.

I remember not understanding the factorization of $$T$$ a few years ago; the key is the recurrence relation $$t_{2n} = t_n$$, $$t_{2n+1} = 1-t_n$$.

Alternative explanation of the factorization: from the "append the negated sequence" definition one can directly see $$\Pi_n (1-x^{2^n})$$, since each term does exactly that.

• That's an interesting example. – Klangen Feb 25 '20 at 8:16

This is not an answer, but too long for a comment.

> so far I have not come across any applications in computer science. This surprises me, for the two following reasons:
> • The Thue-Morse sequence is automatic, i.e., the sequence is fully characterized by a finite automaton,
> • It is a binary sequence.

You may want to refine your criteria for what makes something applicable in computer science. These two properties don't seem like very good criteria:

• The fact that this sequence is automatic is true and does suggest studying it in the domain of computer science, but it's a very weak statement; most sequences (almost all that I am aware of) are computable. Also, this sequence is not regular, so your statement that it is "characterized by a finite automaton" seems misleading.
• The fact that the sequence is binary means nothing; I don't see that as relevant to its applicability.
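The rolling-hash failure mentioned in the first answer is easy to reproduce. The sketch below is my illustration (the base 31 and the length 2048 are arbitrary choices, not from the original answer): a Thue-Morse word and its letterwise complement receive the same polynomial hash modulo $$2^{64}$$, because the difference of the two hashes is a multiple of $$\Pi_k (1-p^{2^k})$$, which for odd base $$p$$ is divisible by a very high power of 2.

```python
def thue_morse_word(n_bits):
    """Build the Thue-Morse sequence by repeatedly appending the complement."""
    s = [0]
    while len(s) < n_bits:
        s += [1 - b for b in s]
    return s[:n_bits]

def poly_hash(word, base, mod):
    """h = sum(word[i] * base**i) mod `mod`, evaluated via Horner's rule."""
    h = 0
    for c in reversed(word):
        h = (h * base + c) % mod
    return h

bits = thue_morse_word(2048)
a = [ord("A") + x for x in bits]        # the word ABBABAAB...
b = [ord("A") + (1 - x) for x in bits]  # its letterwise complement
assert a != b
# Distinct strings, identical hashes mod 2**64:
assert poly_hash(a, 31, 2**64) == poly_hash(b, 31, 2**64)
```

Hashing modulo a large prime instead (e.g. $$2^{61}-1$$) separates the same pair of strings, which is why the usual advice is to avoid power-of-two moduli.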
2021-04-10 11:09:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 26, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8699111342430115, "perplexity": 360.53810066012653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038056869.3/warc/CC-MAIN-20210410105831-20210410135831-00368.warc.gz"}
https://chemistry.stackexchange.com/questions/54901/how-to-name-this-benzene-alcohol
# How to name this benzene alcohol? The following is the molecule I am confused about. Earlier in my textbook it gives an example of how to name benzenes with 2 or more hydroxyl groups: My textbook (Nelson 12 Chemistry) is infamous for its crappiness but my school is too lazy to change it, so I've been asking a few of these questions in recent weeks. 1. What is the proper, IUPAC name of the first molecule depicted in this question? 2. Are there any other commonly accepted names for this molecule and under what convention are they named? Personally, I would name the first molecule benzene-1,2,4-triol, but the textbook gives the name 1,2,4-trihydroxybenzene, using the hydroxyl groups as substituents. The compounds given in the question contain the characteristic group $(\ce{-OH})$. Since there is only one characteristic group, the seniority order of classes is not relevant in this case; thus, the $\ce{-OH}$ substituent corresponds to the principal characteristic group that is expressed as a suffix (‘ol’). Therefore, the preferred IUPAC name (PIN) for compound (e) is benzene-1,2,4-triol. Various traditional names exist for hydroxy compounds. According to the current version of Nomenclature of Organic Chemistry – IUPAC Recommendations and Preferred Names 2013 (Blue Book), only the name ‘phenol’ is retained as a PIN. P-63.1.1.1 Only one name is retained, phenol, for $\ce{C6H5-OH}$, both as a preferred name and for general nomenclature. The structure is substitutable at any position. Locants 2, 3, and 4 are recommended, not o, m, and p. Therefore, the PIN for compound (c) is indeed phenol. Furthermore, according to Subsection P-63.1.1.2, the names • pyrocatechol (benzene-1,2-diol), • resorcinol (benzene-1,3-diol), and • hydroquinone (benzene-1,4-diol) are retained but only for general nomenclature and only when unsubstituted. 
Therefore, the PIN for compound (d) is the systematic name benzene-1,2-diol (the traditional name ‘pyrocatechol’ may be used in general nomenclature). Loong gives an excellent overview of IUPAC approved names for your molecule. (He always does.) However you should also be aware that the common name for this compound is hydroxyquinol, which is still in common use. The other two isomers of benzenetriol also have non-IUPAC-approved common names which are also in wide use. They are phloroglucinol for benzene-1,3,5-triol and pyrogallol for benzene-1,2,3-triol. A look at the Google N-grams viewer for these terms shows that pyrogallol and phloroglucinol are used far, far more often than the word "benzenetriol", while hydroxyquinol is used at about the same frequency.
2021-09-25 05:56:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3161470293998718, "perplexity": 2615.050921239627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057598.98/warc/CC-MAIN-20210925052020-20210925082020-00365.warc.gz"}
https://homework.cpm.org/category/ACC/textbook/gb8i/chapter/cc43/lesson/cc43.2.1/problem/3-77
3-77. For each of the polygons formed by algebra tiles below:

• Sketch and label the shape on your paper and write an expression that represents the perimeter.
• Simplify your perimeter expression as much as possible.
• Start by labeling the sides. $x+1+y+1+x+1+x+1+y+1+x+1$ Combine like terms. $4x+2y+6$
• See the help for part (a). $2x+4$
• $2x+4y+6$
• See the help for part (a). $2x+2y+6$
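The "combine like terms" step can be checked mechanically. A small illustrative sketch (not part of the original problem; the term list simply transcribes the first perimeter expression):

```python
from collections import Counter

# Terms of the perimeter expression x+1+y+1+x+1+x+1+y+1+x+1
terms = ["x", "1", "y", "1", "x", "1", "x", "1", "y", "1", "x", "1"]
counts = Counter(terms)  # like terms collapse into coefficients
simplified = f"{counts['x']}x+{counts['y']}y+{counts['1']}"
print(simplified)  # → 4x+2y+6
```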
2019-10-19 17:45:54
{"extraction_info": {"found_math": true, "script_math_tex": 5, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49698981642723083, "perplexity": 6676.392756731505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986697439.41/warc/CC-MAIN-20191019164943-20191019192443-00552.warc.gz"}
https://labs.tib.eu/arxiv/?author=A.%20Gromliuk
• ### Twin GEM-TPC Prototype (HGB4) Beam Test at GSI and Jyväskylä - a Development for the Super-FRS at FAIR (1711.08158) Nov. 22, 2017 physics.ins-det

The FAIR[1] facility is an international accelerator centre for research with ion and antiproton beams. It is being built at Darmstadt, Germany as an extension to the current GSI research institute. One major part of the facility will be the Super-FRS[2] separator, which will be included in phase one of the project construction. The NUSTAR experiments will benefit from the Super-FRS, which will deliver an unprecedented range of radioactive ion beams (RIB). These experiments will use beams of different energies and characteristics in three different branches: the high-energy branch utilizes the RIB at relativistic energies of 300-1500 MeV/u as created in the production process, the low-energy branch aims to use beams in the range of 0-150 MeV/u, whereas the ring branch will cool and store beams in the NESR ring. The main tasks for the Super-FRS beam diagnostics chambers will be the setup and adjustment of the separator as well as providing tracking and event-by-event particle identification. The Helsinki Institute of Physics, together with the Detector Laboratory and Experimental Electronics at GSI, is in a joint R&D effort on a GEM-TPC detector that could satisfy the requirements of such tracking detectors in terms of tracking efficiency, space resolution, count-rate capability and momentum resolution. The current prototype, the fourth generation of this type, consists of two GEM-TPCs in a twin configuration inside the same vessel; that is, one of the GEM-TPCs is flipped on the middle plane w.r.t. the other one. This chamber was tested at the Jyväskylä accelerator with proton projectiles, and at GSI with uranium, fragment and carbon beams, during the year 2016.

• ### Twin GEM-TPC Prototype (HGB4) Beam Test at GSI - a Development for the Super-FRS at FAIR (1612.05488) Dec. 16, 2016 physics.ins-det

The GEM-TPC detector will be part of the standard Super-FRS detection system, as tracker detectors at several focal stations along the separator and its three branches.

• The exclusive charmonium production process in $\bar{p}p$ annihilation with an associated $\pi^0$ meson, $\bar{p}p\to J/\psi\pi^0$, is studied in the framework of QCD collinear factorization. The feasibility of measuring this reaction through the $J/\psi\to e^+e^-$ decay channel with the PANDA (AntiProton ANnihilation at DArmstadt) experiment is investigated. Simulations on signal reconstruction efficiency as well as background rejection from various sources, including the $\bar{p}p\to\pi^+\pi^-\pi^0$ and $\bar{p}p\to J/\psi\pi^0\pi^0$ reactions, are performed with PandaRoot, the simulation and analysis software framework of the PANDA experiment. It is shown that the measurement can be done at PANDA with significant constraining power under the assumption of an integrated luminosity attainable in four to five months of data taking at the maximum design luminosity.

• Simulation results for future measurements of electromagnetic proton form factors at PANDA (FAIR) within the PandaRoot software framework are reported. The statistical precision with which the proton form factors can be determined is estimated. The signal channel $\bar p p \to e^+ e^-$ is studied on the basis of two different but consistent procedures. The suppression of the main background channel, $\textit{i.e.}$ $\bar p p \to \pi^+ \pi^-$, is studied. Furthermore, the background versus signal efficiency and the statistical and systematic uncertainties on the extracted proton form factors are evaluated using two different procedures. The results are consistent with those of a previous simulation study using an older, simplified framework. However, a slightly better precision is achieved in the PandaRoot study over a large range of momentum transfer, assuming the nominal beam conditions and detector performance.
2019-12-11 17:41:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6197214126586914, "perplexity": 2023.6331471159262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540531974.7/warc/CC-MAIN-20191211160056-20191211184056-00526.warc.gz"}
https://datascience.stackexchange.com/questions/11543/dummy-coding-a-column-in-r-with-multiple-levels
# Dummy coding a column in R with multiple levels

I have a dependent variable measuring the net revenue. One of the major predictors affecting this is "product", i.e. the product sold to the customer. My randomly sampled dataset contains 1.4 million entries. Products are assigned a specific categorical value. I feel that using dummy variables to represent the products would be apt; however, there are 4481 levels of products. I do not know how to code so many levels in R.

model.matrix(~ product, data=salesdata)

returns an error. (Needs 38.4GB of memory.)

Can someone guide me a little on how to code these categorical variables?

Dependent: Net revenue (quantitative)
Independent: Product code (quantitative but treated as qualitative since values are nominal)

## 2 Answers

You can use either sparse matrices or feature hashing.

Sparse Matrix

I suppose that using a sparse matrix is the only choice. I suspect that this line of code will work. This uses the Matrix package.

sparseProducts <- sparse.model.matrix(~ product, data=salesdata)

Take my example:

sparseDiagonalMatrix <- sparse.model.matrix(~., data.frame(V1 = as.factor(seq(1, 10))))

Each column represents a different factor level; this will yield:

1  1 . . . . . . . . .
2  1 1 . . . . . . . .
3  1 . 1 . . . . . . .
4  1 . . 1 . . . . . .
5  1 . . . 1 . . . . .
6  1 . . . . 1 . . . .
7  1 . . . . . 1 . . .
8  1 . . . . . . 1 . .
9  1 . . . . . . . 1 .
10 1 . . . . . . . . 1

> class(sparseDiagonalMatrix)
[1] "dgCMatrix"
attr(,"package")
[1] "Matrix"

Alternatively you can remove the intercept and have all zeros represent class 1:

sparseDiagonalMatrix <- sparse.model.matrix(~., data.frame(V1 = as.factor(seq(1, 10))))[, -1, drop=FALSE]

10 x 9 sparse Matrix of class "dgCMatrix"
   V12 V13 V14 V15 V16 V17 V18 V19 V110
1  . . . . . . . . .
2  1 . . . . . . . .
3  . 1 . . . . . . .
4  . . 1 . . . . . .
5  . . . 1 . . . . .
6  . . . . 1 . . . .
7  . . . . . 1 . . .
8  . . . . . . 1 . .
9  . . . . . . . 1 .
10 . . . . . . . . 1

> class(sparseDiagonalMatrix)
[1] "dgCMatrix"
attr(,"package")
[1] "Matrix"

You will then need a modelling package that supports sparse matrices for the net-revenue model, though. Fortunately, most modern mainstream packages support sparse matrices.

Feature Hashing

Here is a great explanation of feature hashing in R (among other techniques), which is also an alternative, especially useful when you have hundreds of thousands or millions of levels: https://amunategui.github.io/feature-hashing/

• I will try your method and let you know. The link that you have provided is really helpful! Alternatively, can I use multinomial logit regression? It seems to support categorical as well as quantitative variables – Anonymint May 3 '16 at 5:29
• You have alternatives there like SparseM, a linear-kernel SVM from e1071 (equivalent to logistic regression), or MatrixModels (see ?MatrixModels:::lm.fit.sparse) – wacax May 3 '16 at 17:29
• By the way, xgboost also supports sparse matrices. – wacax Sep 16 '16 at 17:00

For the most part, models built in R (for example, linear regression using lm) can handle categorical data coded as a factor and do not need any dummy coding. You just need to do this before passing the data to lm:

salesdata$product <- factor(salesdata$product)

So, depending on the model you are about to build, you might not need to create dummy variables.
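The memory argument behind the sparse approach is language-independent: a one-hot matrix over 4481 product levels has exactly one nonzero per row, so only the positions of the 1s need to be stored, giving O(n) memory instead of O(n × 4481). A minimal sketch of that idea in Python (illustrative only; the product codes are made up):

```python
def sparse_one_hot(values):
    """Return (levels, nonzeros): nonzeros[i] is the (row, col) position of
    the single 1 in row i of the dummy matrix; every other entry is 0."""
    levels = sorted(set(values))
    col = {v: j for j, v in enumerate(levels)}
    nonzeros = [(i, col[v]) for i, v in enumerate(values)]
    return levels, nonzeros

levels, nonzeros = sparse_one_hot(["p42", "p7", "p42", "p9"])
print(levels)    # → ['p42', 'p7', 'p9']
print(nonzeros)  # → [(0, 0), (1, 1), (2, 0), (3, 2)]
```

Real sparse-matrix libraries store essentially this (row, column, value) information in compressed form, which is what R's dgCMatrix does.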
2021-01-18 20:29:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8097649216651917, "perplexity": 439.63598919715497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703515235.25/warc/CC-MAIN-20210118185230-20210118215230-00265.warc.gz"}
https://www.physicsforums.com/threads/a-question-on-calculus-of-variations.720374/
# A question on calculus of variations

1. Nov 2, 2013

### nenyan

1. The problem statement, all variables and given/known data

δ (∂x'^μ/∂x^β) = 0

This equation is in my textbook. I don't quite understand it. Here x'^μ is a coordinate component.

2. Relevant equations

3. The attempt at a solution

2. Nov 2, 2013

### vanhees71

Is the transformation from $x$ to $x'$ given, or what are your symbols supposed to mean?

3. Nov 2, 2013

### nenyan

Yes. It's the transformation from $x$ to $x'$. δ is the variational symbol.

4. Nov 2, 2013

### vanhees71

Ok, you must define the meaning of $x'$ or give the complete variational problem. I don't know what's meant by this symbol!
2018-03-24 00:50:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5771216750144958, "perplexity": 2733.529753769225}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257649508.48/warc/CC-MAIN-20180323235620-20180324015620-00796.warc.gz"}
http://math.stackexchange.com/questions/258891/is-the-set-1-emptyset-a-subset-of-1/258896
# Is the set $\{\{1\},\emptyset\}$ a subset of $\{\{1\}\}$?

Let $A = \{\{1\},\emptyset\}$, $B=\{\{1\}\}$. Is it true that $A\subset B$?

## 3 Answers

Note that $\emptyset \in A$ but $\emptyset \notin B$. However, note that $\{1\} \in B$ and $\{1\} \in A$. Hence, for all $x \in B$, we have that $x \in A$. Hence, in fact, $B \subset A$.

No, because $\varnothing \in A$, but $\varnothing\notin B$.

False. For $A \subset B$ we need $(\forall x \in A)\, x\in B$. But $(\exists x\in A)\, x \notin B$, namely $x = \emptyset$. This is the logical negation, so $A \not\subset B$.
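The claim can also be checked mechanically; a small Python sketch, using frozensets because ordinary (mutable) sets cannot themselves be set elements:

```python
# Model A = {{1}, {}} and B = {{1}} with frozensets.
A = {frozenset({1}), frozenset()}
B = {frozenset({1})}

print(A <= B)  # A subset of B? False: the empty set is in A but not in B
print(B <= A)  # B subset of A? True: every element of B (just {1}) is in A
```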
https://datasciencelk.com/category/probability/
# Probability

## Poisson Distribution Explained

The Poisson Distribution gives the probability of a number of events happening in a fixed time interval.

## Uniform Probability Distribution

In a Uniform Distribution, the Probability Density Function (PDF) is the same for all possible X values. Sometimes this is called a Rectangular Distribution. There are two parameters in this distribution, a minimum (A) and a maximum (B).

## Negative Binomial Distribution

In the Negative Binomial Distribution, we are interested in the number of failures that occur before a fixed number of successes. This is why the prefix "Negative" is there. When we are interested only in the number of trials required for a single success, we call it a Geometric Distribution.

## Binomial Probability Distribution

The Binomial Distribution is used to find probabilities related to a dichotomous population. It applies to a Binomial Experiment, where each trial can result in only two outcomes: Success or Failure. In Binomial Experiments, we are interested in the number of successes.

## Probability Mass Function

The Probability Mass Function (PMF) of X says how the total probability of 1 is distributed (allocated) among the various possible X values.

## Expected Value of a Random Variable

The Expected Value is the average value we get for a certain Random Variable when we repeat an experiment a large number of times. It is the theoretical mean of a Random Variable. The Expected Value is based on population data, and is therefore a parameter.
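The PMF and Expected Value descriptions above can be made concrete in a few lines of Python; the helper names below (`binom_pmf`, `poisson_pmf`, `expected_value`) are illustrative, not from any particular library:

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    """P(X = k) for Binomial(n, p): probability of k successes in n trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """P(X = k) for Poisson(lam): probability of k events in a fixed interval."""
    return exp(-lam) * lam**k / factorial(k)

def expected_value(pmf_pairs):
    """Theoretical mean: sum of x * P(X = x) over the support."""
    return sum(x * p for x, p in pmf_pairs)

n, p = 10, 0.3
support = [(k, binom_pmf(k, n, p)) for k in range(n + 1)]
print(sum(pr for _, pr in support))   # total probability, approximately 1
print(expected_value(support))        # theoretical mean, approximately n * p = 3
```

The same pattern works for the Poisson PMF, whose expected value equals its rate parameter.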
http://mathhelpforum.com/trigonometry/201244-proving-trigonometric-id.html
# Thread: Proving of trigonometric ID

1. ## Proving of trigonometric ID (solved)

Good day,

I'm stuck on proving the following trigonometric identity:

(sin 2x + sin 4x + sin 6x) / (cos 2x + cos 4x + cos 6x) = tan 4x

I do hope that someone can give me some advice on how to prove this, as I've been going at it for the past 2 days and I feel that I'm not getting anywhere.

2. ## Re: Proving of trigonometric ID

Originally Posted by dd86
> I'm stuck on proving the following trigonometric identity:
> (sin 2x + sin 4x + sin 6x) / (cos 2x + cos 4x + cos 6x) = tan 4x

Use the sum-to-product identities,

$\displaystyle \sin\theta+\sin\phi = 2\sin\left(\frac{\theta+\phi}2\right)\cos\left(\frac{\theta-\phi}2\right)$

$\displaystyle \cos\theta+\cos\phi = 2\cos\left(\frac{\theta+\phi}2\right)\cos\left(\frac{\theta-\phi}2\right)$

We have

$\displaystyle \frac{\sin2x+\sin4x+\sin6x}{\cos2x+\cos4x+\cos6x}$

$\displaystyle =\frac{\sin4x+\left(\sin2x + \sin6x\right)}{\cos4x + \left(\cos2x+\cos6x\right)}$

$\displaystyle =\frac{\sin4x+2\sin4x\cos2x}{\cos4x + 2\cos4x\cos2x}$

$\displaystyle =\frac{\sin4x\left(1+2\cos2x\right)}{\cos4x\left(1 + 2\cos2x\right)}$

$\displaystyle =\frac{\sin4x}{\cos4x} = \tan4x.$

Please note, however, that the cancellation step is not valid when $\displaystyle 1+2\cos2x = 0.$ The identity actually does not hold for these values of $\displaystyle x,$ as the original expression is undefined at such values. So the equation is only identically true for $\displaystyle x\neq\pi\left(k\pm\frac13\right)\!,\ \ k\in\mathbb{Z}.$

3. ## Re: Proving of trigonometric ID

Hi Reckoner,

Thanks so much. Now it's clear where I went wrong.
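Since the identity only holds away from the excluded values, a quick numerical spot-check in Python is a useful sanity test alongside the algebraic proof:

```python
from math import sin, cos, tan, pi

def lhs(x):
    """Left-hand side of the identity."""
    return (sin(2*x) + sin(4*x) + sin(6*x)) / (cos(2*x) + cos(4*x) + cos(6*x))

# Spot-check at points where the denominator and the cancelled
# factor 1 + 2*cos(2x) are both nonzero.
for x in (0.1, 0.5, 1.0, -0.7):
    assert abs(lhs(x) - tan(4*x)) < 1e-9

# At an excluded value x = pi/3 (k = 0, plus sign), the cancelled
# factor vanishes, so the original quotient is undefined there.
print(1 + 2*cos(2*pi/3))   # approximately 0
```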
https://dcase.community/challenge2022/task-sound-event-localization-and-detection-evaluated-in-real-spatial-sound-scenes
# Sound Event Localization and Detection Evaluated in Real Spatial Sound Scenes

### Coordinators

Archontis Politis, Yuki Mitsufuji, Kazuki Shimada, Tuomas Virtanen, Sharath Adavanne, Parthasaarathy Sudarsanam, Daniel Krause, Naoya Takahashi, Shusuke Takahashi, Yuichiro Koyama

The goal of the sound event localization and detection task is to detect occurrences of sound events belonging to specific target classes, track their temporal activity, and estimate their directions-of-arrival or positions during it.

The challenge has ended. Full results for this task can be found on the results page.

# Description

Given multichannel audio input, a sound event localization and detection (SELD) system outputs a temporal activation track for each of the target sound classes, along with one or more corresponding spatial trajectories when the track indicates activity. This results in a spatio-temporal characterization of the acoustic scene that can be used in a wide range of machine cognition tasks, such as inference on the type of environment, self-localization, navigation without visual input or with occluded targets, tracking of specific types of sound sources, smart-home applications, scene visualization systems, and acoustic monitoring, among others.

This year the challenge task changes considerably compared to the previous iterations, since it transitions from computationally generated spatial recordings to manually annotated recordings of real sound scenes. This change brings a number of significant differences in the task setup, detailed below.

This is the fourth iteration of the task in the DCASE Challenge. The first three challenges were based on emulated multichannel recordings, generated from event sample banks spatialized with spatial room impulse responses (SRIRs) captured in various rooms and mixed with spatial ambient noise recorded at the same locations.
At every successive iteration the acoustical conditions were increased in complexity, in order to bring the task closer to more challenging real-world conditions. A table showing basic differences between the previous 3 challenges follows:

After the continuous development of the methods submitted in those challenges to tackle the SELD task, a natural step forward is testing the next iteration of systems on real spatial sound scene recordings. A dataset of such recordings was collected and annotated for the challenge. This transition brings a number of differences and changes compared to the previous years; some of them are summarized below.

# Audio dataset

The Sony-TAu Realistic Spatial Soundscapes 2022 (STARSS22) dataset contains multichannel recordings of sound scenes in various rooms and environments, together with temporal and spatial annotations of prominent events belonging to a set of target classes. The dataset was collected in two different countries: in Tampere, Finland, by the Audio Research Group (ARG) of Tampere University, and in Tokyo, Japan, by SONY, using a similar setup and annotation procedure. As in the previous challenges, the dataset is delivered in two spatial recording formats.

The recordings were organized in recording sessions, with each session happening in a unique room. With a few exceptions, groups of participants, sound-making props, and scene scenarios were also unique for each session. Multiple self-contained recordings (clips) of 30 sec to 6 min were captured in each such session. To achieve good variability and efficiency in the data, in terms of presence, density, movement, and/or spatial distribution of the sound events, the scenes were loosely scripted. Collection of data from the TAU side has received funding from Google.
A technical report on the dataset, including details on the challenge setup and the baseline, can be found in:

Publication
Archontis Politis, Kazuki Shimada, Parthasaarathy Sudarsanam, Sharath Adavanne, Daniel Krause, Yuichiro Koyama, Naoya Takahashi, Shusuke Takahashi, Yuki Mitsufuji, and Tuomas Virtanen. STARSS22: a dataset of spatial recordings of real scenes with spatiotemporal annotations of sound events. 2022. URL: https://arxiv.org/abs/2206.01948, arXiv:2206.01948.

## Recording and annotation procedure

The sound scene recordings were captured with a high-channel-count spherical microphone array (Eigenmike em32 by mh Acoustics), simultaneously with a 360° video recording spatially aligned with the spherical array recording (Ricoh Theta V). Additionally, the main sound sources of interest were equipped with tracking markers, which were tracked throughout the recording with an Optitrack Flex 13 system arranged around each scene. All scenes were based on human actors performing some actions, interacting between themselves and with the objects in the scene, and were dynamic by design. Since the actors produced most of the sounds in the scene (but not all), they were additionally equipped with DPA Wireless Go II microphones, providing close-miked recordings of the main events.

Recording would start and stop according to a scene being acted, usually lasting between 1 and 5 minutes. Recording would start in all microphones and tracking devices before the beginning of the scene, and would stop right after it. A clapper sound would initiate the acting, and it would serve as a reference signal for synchronization between the em32 recording, the Ricoh Theta V video, the DPA wireless microphone recordings, and the Optitrack tracker data. Synchronized clips of all of them would be cropped and stored at the end of each recording session.
By combining information from the wireless microphones, the optical tracking data, and the 360° videos, spatiotemporal annotations were extracted semi-automatically and validated manually. More specifically, the actors were tracked all through each recording session wearing headbands with markers, and the spatial positions of other human-related sources, such as the mouth, hands, or footsteps, were geometrically extrapolated from those head coordinates. Additional trackers were mounted on other sources of interest (e.g. vacuum cleaner, guitar, water tap, cupboard, door handle, among others). Each actor had a wireless microphone mounted on their lapel, providing a clear recording of all sound events produced by that actor and/or any independent sources closer to that actor than to the rest.

The temporal annotation was based primarily on those close-miked recordings. The annotators would annotate the sound event activity and label its class by listening to those close-miked signals. Events that were not audible in the overall scene recording of the em32 were not annotated, even if they were audible in the lapel recordings. In ambiguous cases, the annotators could rely on the 360° video to associate an event with a certain actor or source.

The final sound event temporal annotations were associated with the tracking data through the class of each sound event and the actor that produced it. All tracked Cartesian coordinates delivered by the tracker were converted to directions-of-arrival (DOAs) with respect to the coordinates of the Eigenmike. Finally, the class, temporal, and spatial annotations were combined and converted to the challenge format. Validation of the annotations was done by observing videos of the activities of each class, visualized as markers positioned at their respective DOAs, overlaid on the 360° video from the Ricoh Theta V.
## Recording formats

The array responses of the two recording formats can be considered known. The following theoretical spatial responses (steering vectors) describe the directional response of each channel of the two formats to a source incident from a direction-of-arrival (DOA) given by azimuth angle $$\phi$$ and elevation angle $$\theta$$.

For the first-order Ambisonics (FOA) format:

\begin{eqnarray} H_1(\phi, \theta, f) &=& 1 \\ H_2(\phi, \theta, f) &=& \sin(\phi) \cos(\theta) \\ H_3(\phi, \theta, f) &=& \sin(\theta) \\ H_4(\phi, \theta, f) &=& \cos(\phi) \cos(\theta) \end{eqnarray}

The FOA format is obtained by converting the 32-channel microphone array signals by means of encoding filters based on anechoic measurements of the Eigenmike array response. Note that in the formulas above the encoding format is assumed frequency-independent, which holds true up to around 9 kHz with the specific microphone array; the actual encoded responses gradually deviate from the ideal ones provided above at higher frequencies.
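The ideal FOA response above is straightforward to evaluate; a minimal Python sketch follows (the W/Y/Z/X channel labels in the comments are the conventional ambisonic names, an assumption not stated in the formulas):

```python
from math import sin, cos, radians

def foa_steering(azi_deg, ele_deg):
    """Ideal, frequency-independent FOA response [H1..H4] for a plane wave
    from azimuth phi and elevation theta, per the formulas above."""
    phi, theta = radians(azi_deg), radians(ele_deg)
    return [
        1.0,                    # H1: omnidirectional (W)
        sin(phi) * cos(theta),  # H2 (Y)
        sin(theta),             # H3 (Z)
        cos(phi) * cos(theta),  # H4 (X)
    ]

print(foa_steering(0, 0))   # frontal source: [1.0, 0.0, 0.0, 1.0]
```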
For the tetrahedral microphone array (MIC): the four microphones have the following positions, in spherical coordinates $$(\phi, \theta, r)$$:

Since the microphones are mounted on an acoustically-hard spherical baffle, an analytical expression for the directional array response is given by the expansion:

$$H_m(\phi_m, \theta_m, \phi, \theta, \omega) = \frac{1}{(\omega R/c)^2}\sum_{n=0}^{30} \frac{i^{n-1}}{h_n'^{(2)}(\omega R/c)}(2n+1)P_n(\cos(\gamma_m))$$

where $$m$$ is the channel number, $$(\phi_m, \theta_m)$$ are the specific microphone's azimuth and elevation position, $$\omega = 2\pi f$$ is the angular frequency, $$R = 0.042$$ m is the array radius, $$c = 343$$ m/s is the speed of sound, $$\cos(\gamma_m)$$ is the cosine of the angle between the microphone and the DOA, $$P_n$$ is the unnormalized Legendre polynomial of degree $$n$$, and $$h_n'^{(2)}$$ is the derivative, with respect to the argument, of the spherical Hankel function of the second kind. The expansion is limited to 30 terms, which provides negligible modeling error up to 20 kHz. Example routines that can generate directional frequency and impulse array responses based on the above formula can be found here.

## Sound event classes

13 target sound event classes were annotated. The classes loosely follow the AudioSet ontology:

1. Female speech, woman speaking
2. Male speech, man speaking
3. Clapping
4. Telephone
5. Laughter
6. Domestic sounds
7. Walk, footsteps
8. Door, open or close
9. Music
10. Musical instrument
11. Water tap, faucet
12. Bell
13. Knock

The content of some of these classes corresponds to events of a limited range of AudioSet-related subclasses. These are detailed here to aid the participants:

- Telephone
  - Mostly traditional Telephone Bell Ringing and Ringtone sounds, without musical ringtones.
- Domestic sounds
  - Sounds of Vacuum cleaner
  - Sounds of water boiler, closer to Boiling
  - Sounds of air circulator, closer to Mechanical fan
- Door, open or close
  - Combination of Door and Cupboard open or close
- Music
  - Background music and Pop music played by a loudspeaker in the room.
- Musical instrument
  - Acoustic guitar
  - Marimba, xylophone
  - Cowbell
  - Piano
  - Rattle (instrument)
- Bell
  - Combination of sounds from hotel bell and glass bell, closer to Bicycle bell and single Chime.
- The speech classes contain speech in a few different languages.
- There are occasionally localized sound events that are not annotated and are considered as interferers, with examples such as computer keyboard, shuffling cards, dishes, pots, and pans.
- There is natural background noise (e.g. HVAC noise) in all recordings, at very low levels in some and at quite high levels in others. Such mostly diffuse background noise should be distinct from other noisy target sources (e.g. vacuum cleaner, mechanical fan), since these are clearly spatially localized.

## Dataset specifications

The specifications of the dataset can be summarized in the following:

- 70 recording clips of 30 sec ~ 5 min durations, with a total time of ~2 hrs, contributed by SONY (development dataset).
- 51 recording clips of 1 min ~ 5 min durations, with a total time of ~3 hrs, contributed by TAU (development dataset).
- 52 recording clips of 40 sec ~ 5.5 min durations, with a total time of ~2 hrs, contributed by SONY+TAU (evaluation dataset).
- A training-test split is provided for reporting results using the development dataset:
  - 40 recordings contributed by SONY for the training split, captured in 2 rooms (dev-train-sony).
  - 30 recordings contributed by SONY for the testing split, captured in 2 rooms (dev-test-sony).
  - 27 recordings contributed by TAU for the training split, captured in 4 rooms (dev-train-tau).
  - 24 recordings contributed by TAU for the testing split, captured in 3 rooms (dev-test-tau).
- A total of 11 unique rooms captured in the recordings, 4 from SONY and 7 from TAU (development set).
- Sampling rate 24 kHz.
- Two 4-channel 3-dimensional recording formats: first-order Ambisonics (FOA) and tetrahedral microphone array (MIC).
- Recordings are taken in two different countries and two different sites.
- Each recording clip is part of a recording session happening in a unique room.
- Groups of participants, sound-making props, and scene scenarios are unique for each session (with a few exceptions).
- To achieve good variability and efficiency in the data, in terms of presence, density, movement, and/or spatial distribution of the sound events, the scenes are loosely scripted.
- 13 target classes are identified in the recordings and strongly annotated by humans.
- Spatial annotations for those active events are captured by an optical tracking system.
- Sound events out of the target classes are considered as interference.
- Occurrences of up to 3 simultaneous events are fairly common, while higher numbers of overlapping events (up to 5) can occur but are rare.

## Reference labels and directions-of-arrival

For each recording in the development dataset, the labels and DOAs are provided in a plain-text CSV file of the same filename as the recording, in the following format:

    [frame number (int)], [active class index (int)], [source number index (int)], [azimuth (int)], [elevation (int)]

Frame, class, and source enumeration begins at 0. Frames correspond to a temporal resolution of 100 ms. Azimuth and elevation angles are given in degrees, rounded to the closest integer value, with azimuth and elevation being zero at the front, azimuth $$\phi \in [-180^{\circ}, 180^{\circ}]$$, and elevation $$\theta \in [-90^{\circ}, 90^{\circ}]$$. Note that the azimuth angle increases counter-clockwise ($$\phi = 90^{\circ}$$ at the left). The source index is a unique integer for each source in the scene, and it is provided only as additional information.
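A minimal sketch for reading this metadata format into per-frame event lists; the function and the example file name are hypothetical, not part of any provided tooling:

```python
import csv
from collections import defaultdict

def load_metadata(path):
    """Parse rows of [frame, class, source, azimuth, elevation] into a dict
    mapping frame index -> list of (class, source, azimuth, elevation)."""
    events = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.reader(f):
            frame, cls, src, azi, ele = (int(v) for v in row)
            events[frame].append((cls, src, azi, ele))
    return events

# Frames have a 100 ms resolution, so frame 42 covers 4.2 s to 4.3 s:
# meta = load_metadata("fold1_room1_mix001.csv")  # hypothetical file name
# print(meta[42])
```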
Note that each unique actor gets assigned one such identifier, but not individual events produced by the same actor; e.g. a clapping event and a laughter event produced by the same person have the same identifier. Independent sources that are not actors (e.g. a loudspeaker playing music in the room) get a 0 identifier. Note that source identifier information is only included in the development metadata and is not required to be provided by the participants in their results.

Overlapping sound events are indicated with duplicate frame numbers, and can belong to a different or the same class. An example sequence could be:

    10, 1, 1, -50, 30
    11, 1, 1, -50, 30
    11, 1, 2, 10, -20
    12, 1, 2, 10, -20
    13, 1, 2, 10, -20
    13, 8, 0, -40, 0

which describes that in frames 10-11, an event of class male speech (class 1) belonging to one actor (source 1) is active at direction (-50°, 30°). However, at frame 11 a second instance of the same class appears simultaneously at a different direction (10°, -20°), belonging to another actor (source 2), while at frame 13 an additional event of class music (class 8) appears, belonging to a non-actor source (source 0). Frames that contain no sound events are not included in the sequence.

The development and evaluation versions of the dataset can be downloaded at:

The development dataset is provided with a training/testing split. During the development stage, the testing split can be used for comparison with the baseline results and for consistent reporting of results in the technical reports of the submitted systems, before the evaluation stage.

- Note that even though there are two origins of the data, SONY and TAU, the challenge task considers the dataset as a single entity. Hence models should not be trained separately for each of the two origins, and tested individually on recordings of each of them.
Instead, the recordings of the individual training splits (dev-train-sony, dev-train-tau) and testing splits (dev-test-sony, dev-test-tau) should be combined (dev-train, dev-test), and the models should be trained and evaluated on the respective combined splits.
- The participants can choose to use as input to their models one of the two formats, FOA or MIC, or both simultaneously.

The evaluation dataset will be released a few weeks before the final submission deadline. That dataset consists of only audio recordings, without any metadata/labels. At the evaluation stage, participants can decide the training procedure, i.e. the amount of training and validation files to be used from the development dataset, the number of ensemble models, etc., and submit the results of the SELD performance on the evaluation dataset.

## Development dataset

The recordings in the development dataset follow the naming convention:

    fold[fold number]_room[room number]_mix[recording number per room].wav

The fold number is at the moment used only to distinguish between the training and testing split. The room information is provided for the user of the dataset, to potentially help understand the performance of their method with respect to different conditions.

## Evaluation dataset

The evaluation dataset will consist of recordings without any information on the origin (SONY or TAU) or on the location in the naming convention, as below:

    mix[recording number].wav

## External data

Since the development set contains recordings of real scenes, the presence of each class and the density of sound events vary greatly. To enable more effective training of models to detect and localize all target classes, apart from spatial and spectrotemporal augmentation of the development set, we additionally allow the use of external datasets, as long as they are publicly available. External data examples are sound sample banks, annotated sound event datasets, pre-trained models, and room and array impulse response libraries.
A typical use case could be in the form of sound event datasets containing the target classes, which can be used to generate additional spatial mixtures. Some possible examples of spatialization are:

- using the theoretical steering vectors of any of the two formats presented earlier to emulate anechoic mixtures, with the possibility of background noise recordings decorrelated and added as diffuse across channels,
- using the theoretical steering vectors of any of the two formats presented earlier and a room simulator to spatialize isolated event samples in reverberant conditions,
- using isolated event samples convolved with measured multichannel room impulse responses of any of the two formats, to emulate spatial sound scenes with real reverberation profiles.

The following rules apply on the use of external data:

- The external datasets or pre-trained models used should be freely and publicly accessible before 15 April 2022.
- Participants should inform the organizers in advance about such data sources, so that all competitors know about them and have an equal opportunity to use them. Please send an email or a message in the Slack channel to the task coordinators if you intend to use a dataset or pre-trained model not on the list; we will update the list of external data on the webpage accordingly.
- The participants will have to indicate clearly which external data they have used in their system info and technical report.
- Once the evaluation set is published, no further requests will be taken and no further external sources will be added to the list.
| Name | Type | Available since | Link |
|------|------|-----------------|------|
| FSD50K | audio | 02.10.2020 | https://zenodo.org/record/4060432 |
| ESC-50 | audio | 13.10.2015 | https://github.com/karolpiczak/ESC-50 |
| Wearable SELD dataset | audio | 17.02.2022 | https://zenodo.org/record/6030111 |
| IRMAS | audio | 08.09.2014 | https://zenodo.org/record/1290750 |
| Kinetics 400 | audio, video | 22.05.2017 | https://www.deepmind.com/open-source/kinetics |
| SSAST | pre-trained model | 10.02.2022 | https://github.com/YuanGongND/ssast |
| TAU-NIGENS Spatial Sound Events 2020 | audio | 06.04.2020 | https://zenodo.org/record/4064792 |
| TAU-NIGENS Spatial Sound Events 2021 | audio | 28.02.2021 | https://zenodo.org/record/5476980 |
| PANN | pre-trained model | 19.10.2020 | https://github.com/qiuqiangkong/audioset_tagging_cnn |
| CSS10 Japanese | audio | 05.08.2019 | https://www.kaggle.com/datasets/bryanpark/japanese-single-speaker-speech-dataset |
| VoxCeleb1 | audio, video | 26.06.2017 | https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html |

## Example external data use with baseline

The baseline is trained with a combination of the recordings in the development set and synthetic recordings, generated through convolution of isolated sound samples with real spatial room impulse responses (SRIRs) captured in various spaces of Tampere University. The training can be summarized by the following steps:

1. Sound samples for the target classes were sourced from the FSD50K dataset. The samples were selected based on their labels having only one of the classes of interest, and the annotator rating present and predominant.
2. The sound samples were spatialized using the same SRIRs as the ones used to generate the TAU-NIGENS Spatial Sound Events 2020 and TAU-NIGENS Spatial Sound Events 2021 datasets of synthetic sound scenes. The generation was done with a similar procedure and code as in the latter dataset.
3. 1200 one-minute-long scene recordings were generated for both formats, with a maximum polyphony of 2 and no directional interference.
4. Some additional tuning of event signal energies in the recordings was performed during generation, to better match the event signal energy distribution found in the development dataset.
5. The synthesized recordings were mixed with the real recordings from the development training set.
6. The baseline model was trained on this mixed training set and evaluated on the development testing set.

For reproducibility, we share the generated recordings, along with a list of the FSD files used for the generation.

For participants who would like to use a similar process as above, generating their own data with such measured SRIRs, we have published the responses of 9 rooms.

Additionally, a Python version of the generator code to spatialize and layer the sound events and mix ambient noise, as in our synthesized data, is shared as well. The SELD data generator code is functional, but still WIP, with the code being quite "rough" and not well documented yet. We will be taking care of those issues during the development phase of the challenge. For problems or questions on its use, contact Daniel Krause or Archontis Politis from the organizers.

- Use of external data is allowed, as long as the data are publicly available. Check the section on external data for more instructions.
- Manipulation of the provided training-test split in the development dataset is not allowed.
- Participants are not allowed to make subjective judgments of the evaluation data, nor to annotate it. The evaluation dataset cannot be used to train the submitted system.
- The development dataset can be augmented, e.g. using techniques such as pitch shifting or time stretching, respatialization or re-reverberation of parts, etc.

# Submission

The results for each of the recordings in the evaluation dataset should be collected in individual CSV files.
Each result file should have the same name as the file name of the respective audio recording, but with the .csv extension, and should contain the same information in each row as the reference labels, excluding the source index:

    [frame number (int)],[active class index (int)],[azimuth (int)],[elevation (int)]

Enumeration of frame and class indices begins at zero. The class indices are as ordered in the class descriptions mentioned above. The evaluation will be performed at a temporal resolution of 100 ms. In case the participants use a different frame or hop length for their study, we expect them to use a suitable method to resample the information to the specified resolution before submitting the evaluation results.

In addition to the CSV files, the participants are asked to update the information on their method in the provided file and submit a technical report describing the method. We allow up to 4 system output submissions per participant/team. For each system, meta-information should be provided in a separate file, containing the task-specific information. All files should be packaged into a zip file for submission. The detailed information regarding the challenge submission can be found on the submission page. General information for all DCASE submissions can be found on the Submission page.

# Evaluation

The evaluation is based on metrics evaluating detection and localization performance jointly, and they are similar to the ones used in the previous 2 challenges, with a few differences detailed below.

## Metrics

The metrics are based on true positives ($$TP$$) and false positives ($$FP$$) determined not only by correct or wrong detections, but also by whether they are closer or further than a distance threshold $$T^\circ$$ (angular in our case) from the reference. For the evaluation of this challenge we take this threshold to be $$T = 20^\circ$$.
More specifically, for each class $$c\in[1,...,C]$$ and each frame or segment:

• $$P_c$$ predicted events of class $$c$$ are associated with $$R_c$$ reference events of class $$c$$
• false negatives are counted for misses: $$FN_c = \max(0, R_c-P_c)$$
• false positives are counted for extraneous predictions: $$FP_{c,\infty}=\max(0,P_c-R_c)$$
• $$K_c$$ predictions are spatially associated with references based on the Hungarian algorithm: $$K_c=\min(P_c,R_c)$$. These can also be considered the unthresholded true positives $$TP_c = K_c$$.
• the spatial threshold is applied, which moves the $$L_c\leq K_c$$ matched predictions lying further than the threshold to false positives: $$FP_{c,\geq20^\circ} = L_c$$, and $$FP_c = FP_{c,\infty}+FP_{c,\geq20^\circ}$$
• the remaining matched estimates per class are counted as true positives: $$TP_{c,\leq20^\circ} = K_c-FP_{c,\geq20^\circ}$$
• finally: predictions $$P_c = TP_{c,\leq20^\circ}+ FP_c$$, while references $$R_c = TP_{c,\leq20^\circ}+FP_{c,\geq20^\circ}+FN_c$$

Based on those, we form the location-dependent F1-score ($$F_{\leq 20^\circ}$$) and Error Rate ($$ER_{\leq 20^\circ}$$). Contrary to the previous challenges, in which $$F_{\leq 20^\circ}$$ was micro-averaged, in this challenge we perform macro-averaging of the location-dependent F1-score: $$F_{\leq 20^\circ}= \sum_c F_{c,\leq 20^\circ}/C$$. Additionally, we evaluate localization accuracy through a class-dependent localization error $$LE_c$$, computed as the mean angular error of the matched true positives per class, and then macro-averaged:

• $$LE_c = \sum_k \theta_k/ K_c = \sum_k \theta_k /TP_c$$ for each frame or segment, with $$\theta_k$$ being the angular error between the $$k$$th matched prediction and reference,
• and after averaging across all frames that have any true positives, $$LE_{CD} = \sum_c LE_c/C$$.
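The counting steps above can be sketched for a single frame and class with SciPy's Hungarian solver (`linear_sum_assignment`); this is a simplified illustration, not the official metric implementation, and `count_frame` is a name of our own choosing:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def count_frame(pred_doas, ref_doas, threshold=20.0):
    """Single-frame, single-class counting following the steps above.

    pred_doas: (P, 3) array of predicted DOAs as Cartesian unit vectors.
    ref_doas:  (R, 3) array of reference DOAs as Cartesian unit vectors.
    Returns (thresholded TPs, FPs, FNs, angular errors of the K matched pairs).
    """
    P, R = len(pred_doas), len(ref_doas)
    fn = max(0, R - P)                # misses
    fp_extra = max(0, P - R)          # extraneous predictions
    K = min(P, R)                     # unthresholded true positives
    if K == 0:
        return 0, fp_extra, fn, []
    # pairwise angular distances in degrees between predictions and references
    dist = np.degrees(np.arccos(np.clip(pred_doas @ ref_doas.T, -1.0, 1.0)))
    rows, cols = linear_sum_assignment(dist)   # Hungarian association
    matched = dist[rows, cols]
    far = int(np.sum(matched > threshold))     # matched but beyond 20 deg -> FP
    return K - far, fp_extra + far, fn, matched.tolist()
```

Aggregating these counts over all frames and classes, and then macro-averaging per class, yields the metrics defined above.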
Complementary to the localization error, we compute a localization recall metric per class, also macro-averaged:

• $$LR_c = K_c/R_c = TP_c/(TP_c + FN_c)$$, and
• $$LR_{CD} = \sum_c LR_c/C$$.

Note that the localization error and recall are not thresholded, in order to give complementary information to the location-dependent F1-score by also reflecting localization accuracy outside of the spatial threshold. All metrics are computed in one-second non-overlapping frames. For a more thorough analysis of the joint SELD metrics please refer to:

Publication

Archontis Politis, Annamaria Mesaros, Sharath Adavanne, Toni Heittola, and Tuomas Virtanen. Overview and evaluation of sound event localization and detection in DCASE 2019. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:684–698, 2020. URL: https://ieeexplore.ieee.org/abstract/document/9306885.

#### Overview and Evaluation of Sound Event Localization and Detection in DCASE 2019

##### Abstract

Sound event localization and detection is a novel area of research that emerged from the combined interest of analyzing the acoustic scene in terms of the spatial and temporal activity of sounds of interest. This paper presents an overview of the first international evaluation on sound event localization and detection, organized as a task of the DCASE 2019 Challenge. A large-scale realistic dataset of spatialized sound events was generated for the challenge, to be used for training of learning-based approaches, and for evaluation of the submissions in an unlabeled subset. The overview presents in detail how the systems were evaluated and ranked and the characteristics of the best-performing systems. Common strategies in terms of input features, model architectures, training approaches, exploitation of prior knowledge, and data augmentation are discussed.
Since ranking in the challenge was based on individually evaluating localization and event classification performance, part of the overview focuses on presenting metrics for the joint measurement of the two, together with a reevaluation of submissions using these new metrics. The new analysis reveals submissions that performed better on the joint task of detecting the correct type of event close to its original location than some of the submissions that were ranked higher in the challenge. Consequently, ranking of submissions which performed strongly when evaluated separately on detection or localization, but not jointly on both, was affected negatively.

## Ranking

Overall ranking will be based on the cumulative rank of the metrics mentioned above, sorted in ascending order. By cumulative rank we mean the following: if system A was ranked individually for each metric as $$ER:1, F1:1, LE:3, LR:1$$, then its cumulative rank is $$1+1+3+1=6$$. Then if system B has $$ER:3, F1:2, LE:2, LR:3$$ (cumulative rank 10), and system C has $$ER:2, F1:3, LE:1, LR:2$$ (cumulative rank 8), the overall rank of the systems is A, C, B. If two systems end up with the same cumulative rank, they are assumed to have equal place in the challenge, even though they will be listed alphabetically in the ranking tables.

# Results

The SELD task received 63 submissions in total from 19 teams across the world.
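As an aside, the cumulative-rank scheme described in the Ranking section above can be reproduced in a few lines of Python, using the hypothetical systems A, B, and C from the example:

```python
# Per-metric ranks for three hypothetical systems, as in the example above.
ranks = {
    "A": {"ER": 1, "F1": 1, "LE": 3, "LR": 1},
    "B": {"ER": 3, "F1": 2, "LE": 2, "LR": 3},
    "C": {"ER": 2, "F1": 3, "LE": 1, "LR": 2},
}
# cumulative rank = sum of the individual metric ranks
cumulative = {s: sum(r.values()) for s, r in ranks.items()}
# sort ascending; ties fall back to alphabetical listing
overall = sorted(cumulative, key=lambda s: (cumulative[s], s))
print(cumulative)  # {'A': 6, 'B': 10, 'C': 8}
print(overall)     # ['A', 'C', 'B']
```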
Main results for these submissions are as follows (the table below includes only the best-performing system per submitting team):

| Rank | Submission | Corresponding author | Affiliation | Error Rate (20°) | F-score (20°) | Localization error (°) | Localization recall |
|---|---|---|---|---|---|---|---|
| 1 | Du_NERCSLIP_task3_2 | Jun Du | University of Science and Technology of China | 0.35 (0.30 - 0.41) | 58.3 (53.8 - 64.7) | 14.6 (12.8 - 16.5) | 73.7 (68.7 - 78.2) |
| 5 | Hu_IACAS_task3_3 | Jinbo Hu | Institute of Acoustics, Chinese Academy of Sciences | 0.39 (0.34 - 0.44) | 55.8 (51.2 - 61.1) | 16.2 (14.6 - 17.8) | 72.4 (67.3 - 77.2) |
| 7 | Han_KU_task3_4 | Sung Won Han | Korea University | 0.37 (0.31 - 0.42) | 49.7 (44.4 - 56.6) | 16.5 (14.8 - 18.0) | 70.7 (65.8 - 76.1) |
| 11 | Xie_UESTC_task3_1 | Rong Xie | University of Electronic Science and Technology of China | 0.48 (0.41 - 0.55) | 48.6 (42.5 - 55.4) | 17.6 (16.0 - 19.2) | 73.5 (68.0 - 77.6) |
| 14 | Bai_JLESS_task3_4 | Jisheng Bai | Northwestern Polytechnical University | 0.47 (0.40 - 0.54) | 49.3 (41.8 - 57.1) | 16.9 (15.0 - 18.9) | 67.9 (59.3 - 73.3) |
| 17 | Kang_KT_task3_2 | Sang-Ick Kang | KT Corporation | 0.47 (0.40 - 0.53) | 45.9 (40.1 - 52.6) | 15.8 (13.6 - 18.0) | 59.3 (50.3 - 65.1) |
| 23 | Ko_KAIST_task3_2 | Byeong-Yun Ko | Korea Advanced Institute of Science and Technology | 0.49 (0.42 - 0.55) | 39.9 (33.8 - 46.0) | 17.3 (15.3 - 19.3) | 54.6 (46.5 - 60.5) |
| 27 | Chun_Chosun_task3_3 | Chanjun Chun | Chosun University | 0.59 (0.52 - 0.66) | 31.0 (25.9 - 36.3) | 19.8 (17.3 - 22.6) | 50.7 (42.2 - 56.3) |
| 30 | Scheibler_LINE_task3_1 | Robin Scheibler | LINE Corporation | 0.62 (0.55 - 0.69) | 30.4 (25.2 - 36.3) | 16.7 (14.0 - 19.5) | 49.2 (42.1 - 54.5) |
| 33 | Guo_XIAOMI_task3_2 | Kaibin Guo | Xiaomi | 0.60 (0.53 - 0.67) | 28.2 (22.8 - 34.1) | 23.8 (21.3 - 26.2) | 52.1 (43.4 - 58.1) |
| 33 | Wang_SJTU_task3_2 | Yu Wang | Shanghai Jiao Tong University | 0.67 (0.60 - 0.74) | 27.0 (19.3 - 33.6) | 24.4 (22.0 - 27.1) | 60.3 (53.8 - 65.3) |
| 38 | Park_SGU_task3_4 | Hyung-Min Park | Sogang University | 0.60 (0.53 - 0.67) | 30.6 (25.2 - 36.4) | 21.6 (17.8 - 25.1) | 45.9 (40.3 - 51.0) |
| 42 | FOA_Baseline_task3_1 | Archontis Politis | Tampere University | 0.61 (0.57 - 0.65) | 23.7 (18.7 - 29.4) | 22.9 (21.0 - 26.0) | 51.4 (46.2 - 55.2) |
| 44 | Xie_XJU_task3_1 | Yin Xie | Xinjiang University | 0.66 (0.59 - 0.74) | 25.5 (19.3 - 32.2) | 23.1 (19.9 - 26.4) | 53.1 (42.7 - 59.4) |
| 46 | Kim_KU_task3_2 | Gwantae Kim | Korea University | 0.74 (0.66 - 0.81) | 24.1 (19.8 - 28.9) | 26.6 (23.4 - 29.8) | 55.1 (48.6 - 59.5) |
| 48 | Kapka_SRPOL_task3_4 | Slawomir Kapka | Samsung Research Poland | 0.72 (0.65 - 0.79) | 25.5 (21.3 - 30.4) | 25.4 (21.7 - 29.3) | 49.8 (42.8 - 55.3) |
| 52 | FalconPerez_Aalto_task3_2 | Ricardo Falcon-Perez | Aalto University | 0.73 (0.67 - 0.79) | 21.8 (15.5 - 27.6) | 24.4 (21.7 - 27.1) | 43.1 (35.7 - 48.7) |
| 53 | Wu_NKU_task3_2 | Shichao Wu | Nankai University | 0.69 (0.64 - 0.74) | 17.9 (14.4 - 21.5) | 28.5 (24.5 - 39.7) | 44.5 (38.2 - 48.4) |
| 60 | Zhaoyu_LRVT_task3_1 | Zhaoyu Yan | Lenovo Research | 0.96 (0.88 - 1.00) | 11.2 (8.8 - 13.9) | 31.0 (28.5 - 33.4) | 53.4 (44.4 - 58.9) |
| 65 | Chen_SHU_task3_1 | Zhengyu Chen | Shanghai University | 1.00 (1.00 - 1.00) | 0.3 (0.1 - 0.6) | 60.3 (45.4 - 94.0) | 4.5 (2.9 - 6.3) |

Complete results and technical reports can be found in the

# Baseline system

Similarly to the previous iterations of the challenge, as the baseline we use a straightforward convolutional recurrent neural network (CRNN) based on SELDnet, but with a few important modifications.

Publication

Sharath Adavanne, Archontis Politis, Joonas Nikunen, and Tuomas Virtanen. Sound event localization and detection of overlapping sources using convolutional recurrent neural networks. IEEE Journal of Selected Topics in Signal Processing, 13(1):34–48, March 2018. URL: https://ieeexplore.ieee.org/abstract/document/8567942, doi:10.1109/JSTSP.2018.2885636.

#### Sound Event Localization and Detection of Overlapping Sources Using Convolutional Recurrent Neural Networks

##### Abstract

In this paper, we propose a convolutional recurrent neural network for joint sound event localization and detection (SELD) of multiple overlapping sound events in three-dimensional (3D) space.
The proposed network takes a sequence of consecutive spectrogram time-frames as input and maps it to two outputs in parallel. As the first output, the sound event detection (SED) is performed as a multi-label classification task on each time-frame producing temporal activity for all the sound event classes. As the second output, localization is performed by estimating the 3D Cartesian coordinates of the direction-of-arrival (DOA) for each sound event class using multi-output regression. The proposed method is able to associate multiple DOAs with respective sound event labels and further track this association with respect to time. The proposed method uses separately the phase and magnitude component of the spectrogram calculated on each audio channel as the feature, thereby avoiding any method- and array-specific feature extraction. The method is evaluated on five Ambisonic and two circular array format datasets with different overlapping sound events in anechoic, reverberant and real-life scenarios. The proposed method is compared with two SED, three DOA estimation, and one SELD baselines. The results show that the proposed method is generic and applicable to any array structures, robust to unseen DOA values, reverberation, and low SNR scenarios. The proposed method achieved a consistently higher recall of the estimated number of DOAs across datasets in comparison to the best baseline. Additionally, this recall was observed to be significantly better than the best baseline method for a higher number of overlapping sound events. 
##### Keywords

Direction-of-arrival estimation; estimation; task analysis; azimuth; microphone arrays; recurrent neural networks; sound event detection; direction of arrival estimation; convolutional recurrent neural network

## Baseline changes

Compared to DCASE2021 and the associated published SELDnet version, a few modifications have been integrated into the model, in order to take into account some of the simplest effective improvements demonstrated by the participants in the previous year. In DCASE2021 the baseline adopted the ACCDOA representation for training localization and detection with a single unified regression vector loss, successfully demonstrated in the DCASE2020 challenge and adopted by many other participants during DCASE2021. In this challenge, we maintain the ACCDOA representation, but with a recent extension that makes it suitable for handling simultaneous events of the same class, presented by Shimada et al. in the paper:

Publication

Kazuki Shimada, Yuichiro Koyama, Shusuke Takahashi, Naoya Takahashi, Emiru Tsunoo, and Yuki Mitsufuji. Multi-accdoa: localizing and detecting overlapping sounds from the same class with auxiliary duplicating permutation invariant training. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Singapore, Singapore, May 2022.

#### Multi-ACCDOA: Localizing and Detecting Overlapping Sounds from the Same Class with Auxiliary Duplicating Permutation Invariant Training

##### Abstract

Sound event localization and detection (SELD) involves identifying the direction-of-arrival (DOA) and the event class. The SELD methods with a class-wise output format make the model predict activities of all sound event classes and corresponding locations. The class-wise methods can output activity-coupled Cartesian DOA (ACCDOA) vectors, which enable us to solve a SELD task with a single target using a single network.
However, there is still a challenge in detecting the same event class from multiple locations. To overcome this problem while maintaining the advantages of the class-wise format, we extended ACCDOA to a multi one and proposed auxiliary duplicating permutation invariant training (ADPIT). The multi-ACCDOA format (a class- and track-wise output format) enables the model to solve the cases with overlaps from the same class. The class-wise ADPIT scheme enables each track of the multi-ACCDOA format to learn with the same target as the single-ACCDOA format. In evaluations with the DCASE 2021 Task 3 dataset, the model trained with the multi-ACCDOA format and with the class-wise ADPIT detects overlapping events from the same class while maintaining its performance in the other cases. Also, the proposed method performed comparably to state-of-the-art SELD methods with fewer parameters.

Another modification is the addition of alternative input spatial features for the MIC format of the dataset, which apart from generalized cross-correlation (GCC) vectors now include the frequency-normalized inter-channel phase differences as defined by Nguyen et al. in their recent work:

Publication

Thi Ngoc Tho Nguyen, Douglas L. Jones, Karn N. Watcharasupat, Huy Phan, and Woon-Seng Gan. SALSA-Lite: A fast and effective feature for polyphonic sound event localization and detection with microphone arrays. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Singapore, Singapore, May 2022.

#### SALSA-Lite: A fast and effective feature for polyphonic sound event localization and detection with microphone arrays

##### Abstract

Polyphonic sound event localization and detection (SELD) has many practical applications in acoustic sensing and monitoring. However, the development of real-time SELD has been limited by the demanding computational requirement of most recent SELD systems.
In this work, we introduce SALSA-Lite, a fast and effective feature for polyphonic SELD using microphone array inputs. SALSA-Lite is a lightweight variation of a previously proposed SALSA feature for polyphonic SELD. SALSA, which stands for Spatial Cue-Augmented Log-Spectrogram, consists of multichannel log-spectrograms stacked channelwise with the normalized principal eigenvectors of the spectrotemporally corresponding spatial covariance matrices. In contrast to SALSA, which uses eigenvector-based spatial features, SALSA-Lite uses normalized inter-channel phase differences as spatial features, allowing a 30-fold speedup compared to the original SALSA feature. Experimental results on the TAU-NIGENS Spatial Sound Events 2021 dataset showed that the SALSA-Lite feature achieved competitive performance compared to the full SALSA feature, and significantly outperformed the traditional feature set of multichannel log-mel spectrograms with generalized cross-correlation spectra. Specifically, using SALSA-Lite features increased localization-dependent F1 score and class-dependent localization recall by 15% and 5%, respectively, compared to using multichannel log-mel spectrograms with generalized cross-correlation spectra.

That modification was introduced in order to bring the baseline performance on the microphone array (MIC) format closer to the ambisonic (FOA) one, given the large difference observed in the DCASE2021 challenge, attributed mainly to the use of GCC features in complex multi-source conditions.

## Repository

The baseline, along with more details on it, can be found in:

## Results for the development dataset

The evaluation metric scores for the test split of the development dataset are given below. The location-dependent detection metrics are computed within a 20° threshold from the reference.
| Dataset | ER_20° | F_20° (micro) | F_20° (macro) | LE_CD | LR_CD |
|---|---|---|---|---|---|
| Ambisonic | 0.71 | 36% | 21% | 29.3° | 46% |
| Microphone array | 0.71 | 36% | 18% | 32.2° | 47% |

Note: The reported baseline system performance is not exactly reproducible due to varying setups. However, you should be able to obtain very similar results.

# Citation

A technical report with more details on the collection, annotation, and specifications of the dataset, along with an analysis of the baseline and its properties, will be provided soon. If you are participating in this task or using the dataset and code, please consider citing the following papers:

Publication

Archontis Politis, Kazuki Shimada, Parthasaarathy Sudarsanam, Sharath Adavanne, Daniel Krause, Yuichiro Koyama, Naoya Takahashi, Shusuke Takahashi, Yuki Mitsufuji, and Tuomas Virtanen. STARSS22: a dataset of spatial recordings of real scenes with spatiotemporal annotations of sound events. 2022. URL: https://arxiv.org/abs/2206.01948, arXiv:2206.01948.

#### STARSS22: A dataset of spatial recordings of real scenes with spatiotemporal annotations of sound events

Publication

Archontis Politis, Annamaria Mesaros, Sharath Adavanne, Toni Heittola, and Tuomas Virtanen. Overview and evaluation of sound event localization and detection in DCASE 2019. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:684–698, 2020. URL: https://ieeexplore.ieee.org/abstract/document/9306885.

#### Overview and Evaluation of Sound Event Localization and Detection in DCASE 2019

##### Abstract

Sound event localization and detection is a novel area of research that emerged from the combined interest of analyzing the acoustic scene in terms of the spatial and temporal activity of sounds of interest. This paper presents an overview of the first international evaluation on sound event localization and detection, organized as a task of the DCASE 2019 Challenge.
A large-scale realistic dataset of spatialized sound events was generated for the challenge, to be used for training of learning-based approaches, and for evaluation of the submissions in an unlabeled subset. The overview presents in detail how the systems were evaluated and ranked and the characteristics of the best-performing systems. Common strategies in terms of input features, model architectures, training approaches, exploitation of prior knowledge, and data augmentation are discussed. Since ranking in the challenge was based on individually evaluating localization and event classification performance, part of the overview focuses on presenting metrics for the joint measurement of the two, together with a reevaluation of submissions using these new metrics. The new analysis reveals submissions that performed better on the joint task of detecting the correct type of event close to its original location than some of the submissions that were ranked higher in the challenge. Consequently, ranking of submissions which performed strongly when evaluated separately on detection or localization, but not jointly on both, was affected negatively.
# A drawing issue regarding the forest package and tikz

I provide the following MWE regarding the construction of a semantic tableau for modal logic.

```latex
\documentclass[a4paper,twoside,10pt]{memoir}
\usepackage{alphabeta}
\usepackage{tikz}
\usepackage{forest}
\usetikzlibrary{positioning,graphs,fit}% 'fit' added here: required for the fit= keys below
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsfonts}
\usepackage{amsthm}
\usepackage{mathtools}
\begin{document}
\begin{forest}
  [$F(\Box A\wedge\Box B)\rightarrow\Box(A\wedge B)$, name=root,
      tikz={\node [draw,red,fit=(!1)(!ll)] {};}
    [$T(\Box A\wedge\Box B)$, name=nodeA
      [$F\:\Box(A\wedge B)$, name=nodeB,
          tikz={\node [draw,red,fit=(!1)(!ll)] {};}
        [$T\Box A$, name=nodeC
          [$T\:\Box B$, name=nodeD
            [$F(A\wedge B)$, name=nodeE
              [$FA$, rectangle, draw
                [$TA$, rectangle, draw, name=nodeF
                  [$\otimes$]
                ]
              ]
              [$FB$, rectangle, draw
                [$TB$, rectangle, draw, name=nodeG
                  [$\otimes$]
                ]
              ]
            ]
          ]
        ]
      ]
    ]
  ]
  \draw[->] (nodeA) to [out=west, in=west] (nodeC);
  \draw[->] (nodeB) to [out=east, in=east] (nodeE);
\end{forest}
\end{document}
```

I need to improve the drawing of the arrows. The main issue is to treat the nodes enclosed in the second rectangle as a single (virtual) node, and to draw the tip (namely, the end) of the left arrow (starting from the second node) in the middle of the left vertical side of the second rectangle. It would also be great if the arrows did not intersect the borders of the rectangles (but this is a secondary and less important issue).

I did not precisely know how to read your question, so I added a second possible interpretation in blue, hoping one of them is what you're after.
```latex
\documentclass[a4paper,twoside,10pt]{memoir}
\usepackage{alphabeta}
\usepackage{tikz}
\usepackage{forest}
\usetikzlibrary{positioning,graphs,fit}% 'fit' added here: required for the fit= nodes below
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsfonts}
\usepackage{amsthm}
\usepackage{mathtools}
\begin{document}
\begin{forest}
  [$F(\Box A\wedge\Box B)\rightarrow\Box(A\wedge B)$, name=root,
    [$T(\Box A\wedge\Box B)$, name=nodeA
      [$F\:\Box(A\wedge B)$, name=nodeB,
        [$T\Box A$, name=nodeC
          [$T\:\Box B$, name=nodeD
            [$F(A\wedge B)$, name=nodeE
              [$FA$, rectangle, draw
                [$TA$, rectangle, draw, name=nodeF
                  [$\otimes$]
                ]
              ]
              [$FB$, rectangle, draw
                [$TB$, rectangle, draw, name=nodeG
                  [$\otimes$]
                ]
              ]
            ]
          ]
        ]
      ]
    ]
  ]
  % fit nodes act as the "virtual" rectangles around the node pairs
  \node [draw,red,fit=(nodeA)(nodeB)] (fit1) {};
  \node [draw,red,fit=(nodeC)(nodeD)] (fit2) {};
  \draw[->] (nodeA-|fit1.west) to [out=west, in=west] (nodeC-|fit2.west);
  \draw[->] (nodeB-|fit1.east) to [out=east, in=east] (nodeE);
  \draw[->,blue] (fit1.west) to [out=west, in=west] (fit2.west);
  \draw[->,blue] (fit1.east) to [out=east, in=east] (nodeE);
\end{forest}
\end{document}
```

• Yes, this is what I want, thanks for your effort. – Athanasios Margaris Oct 18 '18 at 19:26
• Is it possible to completely remove the short vertical line between the nodes in the first red rectangle, and type the strings one after the other? This is useful in some cases and I need it. – Athanasios Margaris Oct 19 '18 at 7:52
• I found that I can write the formulas in the same node and then split them into two lines using \\ ... However, they will still belong to the same node. I want each formula to be in its own separate node in order to draw some arrows. – Athanasios Margaris Oct 19 '18 at 8:28
• @AthanasiosMargaris I am wondering if you want to use forest at all for the upper part of the diagram. And I guess that you are after a TikZ matrix of nodes. Yet I am not sure if it is easy to combine this with forest (it seems no one has tried this), but I do not think you need forest for the upper part of the diagram.
BTW, according to the rules of the site you should ask a separate question for this additional request. Asking questions is free, after all. – user121799 Oct 19 '18 at 14:43
HYPERSURFACES IN 𝕊⁴ THAT ARE OF Lₖ-2-TYPE

Authors: Lucas, Pascual; Ramirez-Ospina, Hector-Fabian

Abstract: In this paper we begin the study of $L_k$-2-type hypersurfaces of a hypersphere $\mathbb{S}^{n+1}\subset\mathbb{R}^{n+2}$ for $k\geq 1$. Let $\psi:M^3\rightarrow\mathbb{S}^4$ be an orientable $H_k$-hypersurface which is not an open portion of a hypersphere. Then $M^3$ is of $L_k$-2-type if and only if $M^3$ is a Clifford torus $\mathbb{S}^1(r_1)\times\mathbb{S}^2(r_2)$, $r_1^2+r_2^2=1$.

Keywords: linearized operator $L_k$; $L_k$-finite-type hypersurface; higher order mean curvatures; Newton transformations

Language: English
P

Permeability: the measure of the ease with which magnetic lines of force pass through a given material.
Proton: a positively charged particle with considerable mass.
Phase rotation meter: a device used to determine the phase order of a three-phase electrical system.
Parallel Circuit: In …

R

Reluctance: the opposition that a magnetic circuit presents to the passage of magnetic lines through it.
Reactance: the opposition to the flow of an alternating current by an inductive or capacitive element in the circuit.
Resistance: the measure of opposition to the motion of electrons due …

S

SI unit: the most widely used units of measurement, belonging to an international system of units also known as the SI or Metric System.
Shunt: a precision resistor connected in parallel with a current-indicating meter for the purpose of bypassing a specific fraction of current around …

T

Time Invariance: in a time-invariant system, there are no changes in system structure as a function of time t.
Tesla: the SI unit of magnetic flux density; equal to 1 weber per square meter.
Time Constant: in a capacitor, the time required for a voltage to reach 63.2 …

U

Under-excited: the operating condition of a synchronous machine absorbing reactive power.

V

VFD: Variable Frequency Drive, a technology where the rotational speed of an electric motor is controlled or varied electronically.
VAR: the unit for reactive power Q.
Voltage Drop: the voltage drop across a resistance is the product of current times resistance (IR).
Voltage Source: a two-terminal circuit element having …

Factors Affecting Capacitance | Dielectric Constant

There are three main factors affecting the capacitance of capacitors, which will be discussed in detail in this tutorial. The SI unit of capacitance is the farad, named in honor of the English physicist and chemist Michael Faraday. The unit symbol for the farad is F. Capacitance is the ability …

Wire Gauge Sizes | Circular Mils

In order to compare resistances and conductor sizes with each other, we need to establish a convenient unit. This unit is the mil-foot (mil-ft). A conductor has this unit size if it has a diameter of one mil (0.001 inches) and a length of one foot. The standardized unit for wire …

Types of Resistors

Resistors can be classified into different types according to their construction. Wire-wound resistors are made by wrapping high-resistance wire around an insulated cylinder, as illustrated in Figure 1. This type of resistor is generally used in circuits that carry high currents. Large wire-wound resistors are called power resistors and range in size from ½ …

Nonlinear Resistors | Characteristic Curves of Nonlinear Devices

In most circuits, we can assume that resistance is constant in relation to current and voltage. This linear relation can be shown graphically (Fig. 1: Plot of Linear Relation between Current and Voltage). For example, if 3 V is applied to a certain resistor and 1 A flows, then 6 V …

How to Read Resistor Color Codes | Resistor Color Bands

Carbon resistors are color-coded; that is, they have several color bands painted around the body near one end to identify their ohmic values. Other types of resistors are not color-coded; instead, they have their ohmic values and, sometimes, identifying part numbers printed on them. The code has been established by …

Resistor Power Rating | Power Resistor

The physical size of a resistor is determined not by its resistance but by how much power, or heat, it can dissipate. In electric circuits, the unit of power is the watt (W), named in honor of James Watt. One watt is the power dissipated when one ampere flows under …

The opposition that a circuit offers to the flow of AC is called impedance. By measuring the voltage and current in an AC circuit and using the equation

$Z = V/I$

we can obtain the magnitude of the circuit impedance. However, it is often desirable to separate impedance into resistive and reactive …
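The Z = V/I relation extends directly to phasors, where impedance carries both a magnitude and a phase. A minimal Python sketch (the component values are illustrative assumptions, not from the article) shows a series RL circuit whose impedance magnitude is recovered from the voltage and current:

```python
import math

# Illustrative series RL circuit (assumed values, not from the article):
# R = 30 ohm resistor in series with L = 0.1 H at f = 60 Hz.
R, L, f = 30.0, 0.1, 60.0
XL = 2 * math.pi * f * L      # inductive reactance X_L = 2*pi*f*L
Z = complex(R, XL)            # impedance phasor Z = R + jX_L

# With a 120 V source, the phasor current is I = V / Z, and measuring
# |V| and |I| recovers the impedance magnitude via Z = V / I.
V = 120.0
I = V / Z
print(abs(Z))                 # sqrt(R^2 + XL^2), about 48.2 ohms
print(abs(V) / abs(I))        # the same magnitude, recovered from V and I
```

The magnitude alone does not separate the resistive and reactive parts; the phase angle of Z (here `cmath.phase(Z)`) carries that information.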
https://web2.0calc.com/questions/help_4883
Rectangle ABCD is the base of pyramid PABCD. If AB = 8, BC = 4, $$\overline{PA}\perp \overline{AD}$$, $$\overline{PA}\perp \overline{AB}$$, and PB = 17, then what is the volume of PABCD?

Jan 29, 2019, edited by Lightning

Answer:

Let h = the height of the pyramid, and let B = the area of the base of the pyramid = the area of rectangle ABCD = $$4\cdot 8 = 32$$.

Since $$\overline{PA}\perp \overline{AB}$$, triangle PAB is right-angled at A, so

$$\begin{array}{|rcll|} \hline h^2 + AB^2 &=& PB^2 \\ h^2 + 8^2 &=& 17^2 \\ h^2 &=& 17^2-8^2 \\ h^2 &=& 289-64 \\ h^2 &=& 225 \\ \mathbf{h} & \mathbf{=}& \mathbf{15} \\ \hline \end{array}$$

$$\begin{array}{|rcll|} \hline \text{Volume of PABCD} &=& \dfrac{1}{3}\cdot B \cdot h \\\\ &=& \dfrac{1}{3}\cdot 32 \cdot 15 \\\\ &=& 5\cdot 32 \\\\ \mathbf{\text{Volume of PABCD}} & \mathbf{=}& \mathbf{160} \\ \hline \end{array}$$

Jan 29, 2019
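The two steps of the accepted answer, the Pythagorean computation of the height and the (1/3)·B·h volume formula, can be checked with a short Python sketch:

```python
from math import sqrt

# Given: rectangle base AB = 8, BC = 4; lateral edge PB = 17.
# PA is perpendicular to both AB and AD, so PA is the pyramid's height.
AB, BC, PB = 8, 4, 17

# Right triangle PAB: PA^2 + AB^2 = PB^2
h = sqrt(PB**2 - AB**2)       # height PA
base_area = AB * BC           # area of rectangle ABCD
volume = base_area * h / 3    # V = (1/3) * B * h

print(h, base_area, volume)   # 15.0 32 160.0
```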
http://www.mathworks.com/examples/matlab/community/20220-cf2call
MATLAB Examples

# cf2call

Compute call option prices from a characteristic function. Part of the CFH Toolbox.

Syntax

[C K] = CF2CALL(CF)
[C K] = CF2CALL(CF,AUX)

Given a characteristic function CF, returns call option prices C and corresponding strikes K.

Input Arguments

The characteristic function CF should expect the real argument u and return the corresponding characteristic function value. AUX is a structure containing optional parameters for the Fourier transform.

• aux.N denotes the number of points for FRFT evaluation, default 8192
• aux.uMax is the range of integration of the characteristic function, default 200
• aux.damp is the damping parameter required by the Carr/Madan approach, default 1.5
• aux.dx is the discretization of the log strike range, default value 2/N
• aux.x0 contains the log of the spot underlying, default zero
• aux.K is a vector of strike evaluation points

## Example 1: Black Scholes

In the Black-Scholes model, the risk-neutral log spot is normally distributed, and its characteristic function is constructed explicitly in the code below. The FFT option pricing approach of Carr/Madan rapidly evaluates the option price as a Fourier transform of the underlying characteristic function, where k denotes the logarithm of the strike price. The characteristic function of the Black-Scholes model is also included in cflib, using the argument type='BS'.
rf = 0.05; tau = 1; sigma = 0.25; S0 = 100; x0 = log(S0);
cfBS = @(u) exp(-rf*tau + i*u*x0 + i*u*(rf-1/2*sigma^2)*tau - 1/2*u.^2*sigma^2*tau);

Within the discretely spaced strike range K=[40:160], we obtain

K = [40:160]';
aux.K = K;
aux.x0 = x0;
CBS = cf2call(cfBS,aux);

As a check, we test whether the option price obeys the no-arbitrage bound:

bounds = max(S0-K*exp(-rf*tau),0);
plot(K,[bounds CBS]);
title('Black Scholes option prices');
legend('Arbitrage bounds','Black Scholes prices');
xlim([80 120]); xlabel('Strike'); ylabel('Option Price');

## Example 2: Heston's stochastic volatility model

In Heston's stochastic volatility model, the logarithmic spot process and the variance process follow risk-neutral square-root (CIR-type) dynamics. The corresponding characteristic function is included in cflib using argument type='Heston'. Let us assume, in addition to example 1,

v0 = 0.25^2; kappaV = 0.85; thetaV = 0.30^2; sigmaV = 0.1; rho = -0.7;

Translating this into the fields of the par structure required by cflib, we obtain

par.x0 = x0; par.v0 = v0; par.rf = rf; par.q = 0;
par.kappa = kappaV; par.theta = thetaV; par.sigma = sigmaV; par.rho = rho;
aux.x0 = x0;
cfHes = @(u) cflib(u,tau,par,'Heston');
CHes = cf2call(cfHes,aux);
bounds = max(S0-aux.K*exp(-par.rf*tau),0);
plot(K,[bounds CBS CHes]);
title('Comparison of Heston and Black Scholes option prices');
legend('Arbitrage bounds','Black Scholes','Heston');
xlim([80 120]); xlabel('Strike'); ylabel('Option Price');

## Example 3: Bates' model with stochastic intensity

Here, we assume the spot asset volatility to be of the Heston type and that the spot asset jumps log-exponentially with stochastic intensity. The corresponding characteristic function can be recovered using cfaffine.
Let us assume, in addition to examples 1 and 2,

lambda0 = 0.10; kappaL = 0.45; thetaL = 0.15; sigmaL = 0.1;
muJ = -0.25; sigmaJ = 0.30;
jump = @(c) exp(c(1,:)*muJ + 1/2*c(1,:).^2*sigmaJ^2);
m = jump(1)-1;

Transforming these parameters into the AJD coefficients required by cfaffine, we obtain

X0 = [log(S0) ; v0 ; lambda0];
K0 = [rf ; kappaV*thetaV ; kappaL*thetaL];
K1 = [0 -1/2 -m ; 0 -kappaV 0 ; 0 0 -kappaL];
H1 = zeros(3,3,3);
H1(:,:,2) = [1 rho*sigmaV 0 ; rho*sigmaV sigmaV^2 0 ; 0 0 0];
H1(3,3,3) = sigmaL^2;
R0 = rf;
L1 = [0 0 1]';
cfBates = @(u) cfaffine(u,X0,tau,K0,K1,[],H1,R0,[],[],L1,jump);
[CBates] = cf2call(cfBates,aux);
plot(K,[bounds CBS CHes CBates]);
title('Comparison of SV/SJ, Heston and Black Scholes option prices');
legend('Arbitrage bounds','Black Scholes','Heston','SV/SJ');
xlim([80 120]); xlabel('Strike'); ylabel('Option Price');

## Example 4: Option Greeks

In this example, we will compute the greeks of options, i.e. the change in the option price for a small change in an underlying variable.
Let us begin with the delta of an option. A close look at the Carr/Madan option pricing formula from example 1 reveals that the derivative of the option price with respect to the underlying is

$\Delta = \frac{\exp(-\alpha k)}{S\pi}\int_0^{\infty}\exp(-ivk)\,\frac{i(v-(\alpha+1)i)\,\phi(v-(\alpha+1)i)}{\alpha^2+\alpha-v^2+i(2\alpha+1)v}\,dv$

Thus we can employ the call option pricing function cf2call to evaluate the option delta by simply handing over a different characteristic function:

cfDelta = @(u) exp(-x0)*i*u.*cfBS(u);

Within the discretely spaced strike range K=[40:160], we obtain

Delta = cf2call(cfDelta,aux);

Just to make sure, compare the result with the theoretical Black-Scholes delta:

plot(K,Delta,'ro',K,blsdelta(S0,K,rf,tau,sigma,0),'b');

In the same way, we can compute the option's gamma, using the corresponding characteristic function:

cfGamma = @(u) -exp(-2*x0)*(i*u+u.^2).*cfBS(u);
Gamma = cf2call(cfGamma,aux);

Just to make sure, compare the result with the theoretical Black-Scholes gamma:

plot(K,Gamma,'ro',K,blsgamma(S0,K,rf,tau,sigma,0),'b');

## Example 5: Greeks of Bates Model with stochastic intensity

Let us come back to example 3 above, where we assumed stochastic volatility and normally distributed return jumps with stochastic jump intensity. We are interested in the derivative of the option price with respect to

• the spot price
• the spot variance level
• the spot intensity

In the spirit of example 4, we note that all we have to do is pre-multiply the characteristic function with the component that corresponds to our variable of interest. (For the delta with respect to the spot price, we additionally divide by the spot level to obtain the final greek.) Here we require a simple function that returns the first derivatives of our characteristic function with respect to the spot levels:

function out = cfTemp(cf,u,k)
  [out1, ~, out2] = cf(u);
  out = out1.*out2(k,:);
end

cfTemp relies on the fact that cf returns three outputs: the characteristic function cf(u) together with the corresponding exponential constant and coefficient vector. See Theory for details.
Let us now evaluate the resulting greeks:

DeltaSBates = cf2call(@(u) cfTemp(cfBates,u,1),aux)/S0;
DeltaVBates = cf2call(@(u) cfTemp(cfBates,u,2),aux);
DeltaLBates = cf2call(@(u) cfTemp(cfBates,u,3),aux);
subplot(3,1,1); plot(K,DeltaSBates); title('Bates Model \Delta');
subplot(3,1,2); plot(K,DeltaVBates); title('Bates Model Derivative with respect to the spot variance level');
subplot(3,1,3); plot(K,DeltaLBates); title('Bates Model Derivative with respect to the spot intensity level');

## Example 6: Multiple strikes / maturities

In this example, we show how to compute option prices for different strike-maturity combinations in one go. Assume that the underlying security is valued at 100 USD today, and we are interested in call option prices for the following strike-maturity set:

Strikes\Maturity    1M    3M    6M    12M   24M
                    95    94    93    92    90
                   100   100   100   100   100
                   105   108   110   110   105

For our underlying process, we assume a Heston model with the parameters from Example 2. For the times to maturity and the strikes we introduce the following arrays:

tau = [1 3 6 12 24]/12;
K = [95 94 93 92 90 ; 100 100 100 100 100 ; 105 108 110 110 105];

Using the additional argument K in the aux structure of cf2call, we can compute all option prices in one go:

C = cf2call(@(u) cflib(u,tau,par,'Heston'), struct('x0',par.x0,'K',K))

C =
    6.3311    9.3630   12.6840   17.5476   25.3953
    3.1068    5.7038    8.5326   12.9880   20.0154
    1.1938    2.4975    4.3447    8.5270   17.6624
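As a cross-check on Example 1, here is a minimal Python sketch (not part of the CFH Toolbox; a direct trapezoid quadrature stands in for the toolbox's FRFT) that prices a single at-the-money call from the same Black-Scholes characteristic function via the Carr-Madan integral and compares it with the closed form:

```python
import numpy as np
from math import log, sqrt, exp, pi, erf

# Parameters matching the article's Example 1 (Black-Scholes).
S0, K, rf, tau, sigma = 100.0, 100.0, 0.05, 1.0, 0.25
x0 = log(S0)

def cf_bs(u):
    """Discounted characteristic function of the BS log-spot (mirrors cfBS)."""
    u = np.asarray(u, dtype=complex)
    return np.exp(-rf * tau + 1j * u * x0
                  + 1j * u * (rf - 0.5 * sigma**2) * tau
                  - 0.5 * u**2 * sigma**2 * tau)

def carr_madan_call(k, alpha=1.5, u_max=200.0, n=4000):
    """Call price at log-strike k by trapezoid quadrature of the Carr-Madan integral."""
    v = np.linspace(1e-8, u_max, n)
    psi = cf_bs(v - (alpha + 1) * 1j) / (alpha**2 + alpha - v**2 + 1j * (2 * alpha + 1) * v)
    f = np.real(np.exp(-1j * v * k) * psi)
    dv = v[1] - v[0]
    integral = dv * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoid rule
    return exp(-alpha * k) / pi * integral

def bs_call(S, K, r, T, s):
    """Closed-form Black-Scholes call, for comparison."""
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    d1 = (log(S / K) + (r + 0.5 * s**2) * T) / (s * sqrt(T))
    d2 = d1 - s * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

print(carr_madan_call(log(K)))         # close to the closed form below
print(bs_call(S0, K, rf, tau, sigma))  # about 12.34
```

The damping parameter alpha=1.5 and the integration range u_max=200 mirror the defaults of cf2call's aux structure.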
https://mathematica.stackexchange.com/questions/101473/get-evaluate-file-name
# Get (<<) Evaluate File Name

This seems like a simple question due to lack of experience, but I can't seem to find an answer through searching. I am trying to read back a symbol from a DumpSave:

DumpSave[NotebookDirectory[] <> "mySymbol.mx", mySymbol]

Here, using NotebookDirectory[] got my file in the current directory like I wanted, but when I try to use either of the following with Get, it fails:

<< NotebookDirectory[] <> "DiscreteTreatmentRegion.mx";
<< Evaluate[NotebookDirectory[] <> "Mysymbol.mx"];

It seems to me like it is not evaluating the filename correctly, since it gives the message 'Cannot open ("NotebookDirectory[]").' How can I get this to work? Or is there a better way to do what I am trying to do?

• This << (NotebookDirectory[] <> "defs.mx") works for me in a similar situation. Try closing MMA, then loading the notebook in question and running your second line again. Obviously the notebook needs to be saved. – LLlAMnYP Dec 8 '15 at 10:13
• StringJoin (<>) binds more loosely than Get (<<), it appears. The first example is interpreted as Get[NotebookDirectory[]] <> "file.mx", but the second line with Evaluate should work properly. – LLlAMnYP Dec 8 '15 at 10:17

<< is one of a few special operators which turn everything that follows into a string, without having to use quotes. Thus << asd is just another notation for Get["asd"]. Note that the first form has no quotation marks.

The answer to your problem is: use the Get form, and not <<, if the file name is computed as an expression. << is a convenient shorthand for when you type the file name directly.

Some other stringifying operators are >>, >>>, ::, and, since version 10, #. #asd is the same as Slot["asd"]. This is documented under the "File Names" section here.

• I was trying to find a good reference in the docs for that. Are you aware of any? P.S. NotebookDirectory[] was evaluated in my case but failed to get the data anyway.
– Kuba Dec 8 '15 at 11:30 • @Kuba There's a mention under Details for Get: <<"name" is equivalent to <<name. The double quotes can be omitted if the name is of the form specified in "Operator Input Forms". – Szabolcs Dec 8 '15 at 11:31 • I saw that but I wouldn't say it is informative. – Kuba Dec 8 '15 at 11:32 • I see now, hmm I complain to much on docs so let me just go to say nothing more. – Kuba Dec 8 '15 at 11:35 • @Kuba I agree. I added a reference which is a bit better. It mentions which characters can terminate the string (file name). I don't know if # is different, it probably is. – Szabolcs Dec 8 '15 at 11:35
https://dsp.stackexchange.com/questions/61900/complex-signal-sine-reconstruction
Complex signal sine reconstruction

Noob here. I read that any signal can be made by putting together sines and cosines, but the examples always show some kind of basic harmonic wave with constant amplitude, such as a square wave. I understand that constant-amplitude, constant-frequency square and triangle waves can be made from sines, but what about a square wave that sweeps its frequency, or even suddenly jumps to another frequency? What about a square wave whose amplitude changes? What about noise? Say I have a 1-minute-long 48 kHz 16-bit PCM signal that is just white noise; can that be reconstructed from a bunch of sines too? And what about transients, either the sudden square-wave-like ones or the gentle slow fade-in type? Say I have a signal that is silence and then a square wave slowly and smoothly rises in amplitude; how can sines, which are constant in amplitude, ever recreate it? Basically, my point is that these sines are constant in amplitude, frequency, and phase, and they run the entire length of whatever signal we want to reconstruct, while real signals can have silence, transients, periods where amplitude and frequency are constant, and periods where they change. I don't understand how a bunch of sines that run the entire length of a complex signal can ever reconstruct it. How can 1000 sine waves sum up into perfect silence and then suddenly sum into a noisy square wave sweep?

• Hi: the Slutsky-Yule effect says that you can take noise, perform transformations (e.g., calculate moving sums of the noise) and get cyclic behavior (sines, to some extent). So I imagine it's possible to go the other way around also. Check out Slutsky-Yule because it seems that you might find it interesting. – mark leeds Nov 13 at 15:03

A sum of sines/cosines (with frequencies that are integer multiples of a fundamental frequency) will always result in a periodic function (that's what Fourier series are about).
But you can of course approximate any function with finite support by choosing the length of the function's support as your period (if you don't mind that the approximation will be a periodic continuation of the given function). Note that the approximation by a Fourier series is a least squares approximation, not a point-wise approximation. So if the desired function has discontinuities there will always remain some error of fixed magnitude, no matter how many sinusoids you use for the approximation. This oscillatory behavior near discontinuities is called Gibbs phenomenon. Big question. Joseph Fourier proposed this series circa 1807-1822, although there were many earlier predecessor developments. Historically, it took several following decades in the development of mathematical analysis to prove that the Fourier series actually converged. See: https://en.wikipedia.org/wiki/Convergence_of_Fourier_series and https://en.wikipedia.org/wiki/Fourier_series#Convergence Thus, a complete answer to your question might be several university level textbooks. For PCM waveforms, the discrete form of Fourier "deconstruction" is a DFT. Now a DFT is just a (complex arithmetic) square matrix basis transform. Turns out all the sines and cosines (or the equivalent complex exponentials) make up a full orthogonal basis set for the matrix transform. Thus, any data vector (valid quantities, no NaNs) can be multiplied by this type of square transform matrix, and result in another vector, representing sines and cosines (or complex exponentials). That's just a property of square matrix multiplication (given that basis set). This square matrix transform has a proper inverse. Use the inverse transform, and you get your original arbitrary waveform (noise, impulses, steps, etc.) right back (minus numerical noise) from the sines and cosines representation. What else would an inverse matrix transform do? 
Thus, for PCM data, it's just a matter of understanding linear algebra (with perhaps a bit of trig and complex-number material mixed in).

• What does convergence in terms of the DFT mean? – Sweeper Nov 14 at 1:52
• The sines and cosines as components of the basis vectors produce a non-degenerate square matrix. A degenerate DXT matrix would not have a proper inverse, and thus an IDXT(DXT()) would not be able to recreate (converge on) the original input, as an IDFT(DFT()) does. – hotpaw2 Nov 14 at 4:38
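The inverse-transform argument can be demonstrated directly in NumPy: build exactly the kind of "awkward" signal the question describes (silence, a noise burst, then a square wave that fades in), transform it into its sines-and-cosines representation, and invert; the reconstruction matches to floating-point precision. The signal itself is an arbitrary illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4800

# An "awkward" signal: silence, a burst of white noise, then a square
# wave fading in from zero amplitude -- nothing like a steady sinusoid.
t = np.arange(n)
x = np.zeros(n)
x[1600:] = np.sign(np.sin(2 * np.pi * 0.01 * t[1600:])) * np.linspace(0, 1, n - 1600)
x[800:1200] += rng.standard_normal(400)

# DFT = multiplication by an invertible square matrix whose basis vectors
# are complex exponentials (sines and cosines).
X = np.fft.fft(x)          # representation in terms of sinusoids
x_back = np.fft.ifft(X)    # inverse transform

# Reconstruction error is at the level of floating-point round-off.
print(np.max(np.abs(x - x_back.real)))
```

The sinusoids do not need to "turn off" during the silent stretch; their amplitudes and phases in X are chosen so that they cancel there and reinforce elsewhere.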
https://ai.meta.stackexchange.com/questions/1165/accepting-nominations-who-should-moderate-this-site/1166
# Accepting Nominations — Who should moderate this site? [duplicate] Ideally Moderators are elected by the community, but until the community is large enough to hold a proper election, we will be appointing three provisional Moderators to fill those roles. We need your help. Please nominate folks you would like to see become provisional moderators for this site. Your input will provide valuable insight to help us make our selections. You can read more about the process here: Moderators Pro Tempore. ## The Nomination Process: • Nominate a user by posting an 'answer' below. Each nomination should be a separate answer. Use the template at the bottom of this post to complete your nomination. • Self nominations are encouraged. This is a volunteer activity, so users should not feel obligated to accept these positions. A self-nomination is simply a way to say, "I am very much interested in this, so let my record speak for itself." • Tell us about the candidates. Nominations can include links to other activities like Area 51 participation, participation in other sites, or any relevant thoughts/links that may help us make an informed decision. • Nominee should indicate their acceptance by editing the answer to accept/decline the nomination. Nominees: please ensure your profile email is correct so we can contact you. Optionally, you are encouraged to write a bit about yourself following your acceptance. I accept/decline this nomination. Hi, I am name/location/fun fact (all optional). I live in <location>, so I am generally active on this site from <time> to <time>. 
Some other things you may want to know about me are…

## Here is what we'll be looking for in a Moderator candidate:

We are looking for members who are deeply engaged in the community's development; members who:

• Have been consistently active during the earliest weeks of this site's creation
• Show an interest in their meta's community-building activities
• Lead by example, showing patience and respect for their fellow community members in everything they write
• Exhibit those intangible traits discussed in A Theory of Moderation

## Nomination Template

To nominate a candidate, copy and paste the text below as an answer and complete your nomination writeup:

<a href="http://ai.stackexchange.com/users/UserID"> <img src="http://ai.stackexchange.com/users/flair/UserID.png"></a> <a href="http://meta.ai.stackexchange.com/users/UserID"> <img src="http://meta.ai.stackexchange.com/users/flair/UserID.png"></a>

### Notes: This nominee would be a good choice because …

### Notes:

Currently the most voted and most dedicated user, with the relevant knowledge and skills in AI. In addition, he's working in this research area, so he knows what he's talking about. His skills may help improve the quality of this site.

EDIT by NietzscheanAI (formerly known as user217281728): Most kind, thanks. I'm happy to accept this nomination and want to work to make this an informative and useful site. I live in the UK, so I tend to be active on the site between 07.00 and 23.00 GMT. My varied career has included games software company owner, generative music developer, software architect, pure mathematician and (for the last 13 years) AI researcher.
– NietzscheanAI Aug 18 '16 at 12:51
• @user217281728 Get a fancier username maybe? – user1578 Aug 21 '16 at 16:52
• @Rahul2001 - better? – NietzscheanAI Aug 22 '16 at 19:12
• @NietzscheanAI YES! – user1578 Aug 23 '16 at 1:56
• 'e' constant was fancier I think :) – kenorb Aug 23 '16 at 2:01
• @NietzscheanAI just a note: I'm glad you're not a randomly anonymous who-knows-what-he-is-doing AI enthusiast from Russia or whatever :) I mean, because of the lack of information on your profile, sometimes I was playing with such thoughts, but of course it was not a serious issue, but rather some joke in my head. – Zoltán Schmidt Sep 2 '16 at 18:11
• Dear NietzscheanAI, considering the votes, I think we can welcome you in the mod club. As you finally get the diamond, please don't forget: as the site matures and grows older, its standards and customs tend to differ more and more from common sense, and this results in a growing ratio of disappointed newbies leaving the site after their closed first questions. Please pay attention to that! Many SE sites have already fallen into this trap. Thanks! – peterh - Reinstate Monica Sep 7 '16 at 21:27
• @peterh - I'm not an enthusiastic closer of questions. – NietzscheanAI Sep 9 '16 at 8:18

### Notes:

The second most voted and active user, a data scientist with the right skillset across different AI branches. His answers are reliable and interesting. His skills can be a great asset in improving the quality of this site.

EDIT by Matthew Graves: Thanks for the nomination! I'm pleased to accept it. I'm interested in helping this site help people better understand AI and the issues surrounding it, both through direct effort and community building. I've been clearing out review queues here as soon as I got access to them, and that's typically the first thing I check after my comment inbox. I'm currently in Austin, Texas, and so would typically be online from about noon to 2am UTC.
I've been doing machine-learning related work for, depending on how you count it, about 8 years now, mostly as a student but now also as a data scientist. My research effort has mostly been in numerical optimization, machine reliability, and time series analysis, rounded out by my personal interests in psychology, economics, and philosophy. I've been interested in intelligence for as long as I can remember, and that grew to encompass artificial intelligence as soon as I was introduced to it. To a large degree I 'grew up on the internet'; forum-posting has been a major hobby for over half of my life at this point. I've consistently had a reputation for being polite, calm, and open-minded; qualities that I hope would serve me well as a moderator. • I edited the nomination to include the nominee's network-wide flair. – wythagoras Aug 17 '16 at 7:43 ### Notes: This nominee would be a good choice because of his active involvement in the community's development during the private beta and his experience on Stack Exchange! I'll step right up and offer my services to the community as a moderator pro tempore. I confess that I'm just an enthusiast when it comes to artificial intelligence, but I have been highly active here on meta, gaining the community's first silver badge: Convention. I thoroughly enjoy reviewing and I have been working the queues since the site's beginning. I've also spent a large (probably unhealthy, heh) amount of time reading Meta Stack Exchange and the SE blogs, so I'm familiar with the Stack Exchange model, the software, and the expectations for the various roles. I'm also active on Meta Super User, for what it's worth. I live in Illinois (midwestern United States), so I'm usually awake from UTC 15:00 to 3:00. You can read about the things I've created in my profile. I have a blog on which I mentioned the site a while back. I've been doing what I can to make sure this site survives, and that has required casting a few close votes. 
Hopefully I haven't come off as too much of a maniacal ruthless reviewer :). When asked on meta, in comments, or in chat about why a question is closed, I always write up a helpful, respectful explanation. If I ever do something you think is less than ideal, please feel free to ask me about it! Like all humans (though perhaps not AIs!) I make the occasional mistake, and when I see that's happened, I make it right. I have my own opinions and judgments, of course, but I would be happy to carry out as moderator pro tempore the consensus of the community, the mod team, and Stack Exchange. We're all in this together. It's a pleasure building this community with everyone here. I look forward to continuing to the next stage of site growth with y'all! • Full disclosure: it looks like one of the people from Super User's Root Access chat room joined the site and only upvoted this. I had linked the site's Area 51 proposal to drum up traffic for the site, not votes for myself. I apologize for the slight score inflation. On the plus side, we do have a handful of new users from SU looking around our main! – Ben N Aug 22 '16 at 13:55 • ^ That comment alone speaks volumes. Diamond this man. (Also active on main meta, which is worth having in a mod.) – ArtOfCode Aug 23 '16 at 22:36 • Would you mind cleaning up this thread to fix the broken flairs? (I don't have edit privs, and you're like the only one I know still hangs out on Meta...) – Mithical Apr 4 '17 at 22:11 ### Notes: I would like to offer my services as a pro-tem moderator on this site. I have been a relatively active member since I joined on Day 0. I have 135 edits (counting tag-only edits), I was the first one to earn the Strunk and White badge, I am the top reviewer for both Close Votes and Reopen Votes on the main site, I was the first reviewer of Late Answers, and I was the first reviewer on Meta. I have watched Meta, and pitched in when I could. 
I was also one of 25 users to earn the Beta badge, which means that I was an active user in the Private Beta. I now also have the Convention badge, which means that I've been active here on Meta. I may not know so much about AI, really, but I do know enough to be able to tell if something answers the question or not, I think. :) Also, I am one of the only users who has ventured onto chat :P I am also active on this Meta, the Puzzling Meta, and the main Meta*. I am fairly well-versed in the content in the Help Center and site policy, as well. * Okay, I mostly flag things as off-topic. But I have asked/answered some! I'm a 14 year-old kid. The only moderation experience I have is being an admin on 3 Wikias. (Not popular ones - little outdated backwater ones. :P) I live in the UTC+2/3 time zone, although I'm often on late. I don't go to school; I'm homeschooled. I am not a programmer. I have been using SE for a year and 11 months, roughly, so I have a pretty good idea about how the site works :P. • This is how I know this guy: Edited by Mithrandir. I keep seeing it on multiple SE sites. – Cem Kalyoncu Aug 23 '16 at 18:21 • @CemKalyoncu :P I like editing. – Mithical Aug 23 '16 at 19:48 • I told ya, someone is on editing spree ;D – ABcDexter Sep 3 '16 at 20:11 • Yes, but he has it as CM & SE employee. These rules bind his hands: he can't do elected mod tasks, his role is essentially an over-mod exception handler. Being an elected mod, this rules wouldn't bind his hands. – peterh - Reinstate Monica Sep 7 '16 at 18:57 I'll volunteer myself. ### Notes: This nominee would be a good choice because - he is passionate about AI and its potential applications for improving the human condition. This nominee is also a strong supporter of open exchange of scientific knowledge and technology, as expressed in the Open Source, Open Web, Open Data, Open Science and Open Hardware initiatives. This nominee has been participating in multiple Stack Exchange communities for many years. 
You could consider this nominee to be the "ruthless NON closer" as he believes that closing questions is generally harmful to the community, as it is perceived as an aggressive and hostile act by whoever posted the question. This nominee believes that "bad" questions can simply be down-voted and allowed to die from lack of activity in almost all cases. This nominee believes we can strike a balance between being "beginner friendly" and still keeping things interesting enough to attract experts, but believes that it will take some time to establish our presence in the AI world and attract the high-level researchers and others of that ilk. Since I volunteered myself, it should go without saying that I accept this nomination. Hi, I am Phillip. I live in Chapel Hill, NC, so I am generally active on this site from around 10:00am through 1:00am Eastern time. Some other things you may want to know about me are: I am founder / president at Fogbeam Labs, an open source software company. I was a volunteer firefighter for many years and was Assistant Fire Chief of my department for the last couple of years I was there. I am the founder/organizer of the Research Triangle Park "Semantic Web / Artificial Intelligence / Machine Learning" Meetup here in the Raleigh/Durham area. I'm also active on Github and Hacker News. • I edited the nomination to include your meta flair, and make the flair actually match the link it goes to. – wythagoras Aug 17 '16 at 7:42 ### Notes: This nominee would be a good choice because kenorb is a very active user, a person who knows a lot about AI, and, I feel, cares about helping this community grow. kenorb should be one of our moderators - even if his English isn't perfect, I still think he's perfect mod material. :) First of all, I would like to thank you for nomination and I am pleased to take the responsibility of being a pro tempore mod. 
I believe that this site has a unique opportunity to make a huge impact to global technology market driven by artificial intelligence and our everyday life in the very near future by sharing advanced knowledge accessible for all. I have been using SE for over 7 years, I am experienced across a variety of fields and I am familiar with moderation tools and I understand their purpose. I am an experienced software engineer specialising in a variety of information technology stacks with over 18 years experience consulting across a range of sectors and multination companies. One of the recent one is planning to 'to deploy drone army' worldwide which can expand our scope of understanding of artificial intelligence (e.g. imagine flying drones in the restaurant and delivering your food to your table after pressing a single button). Check also my user CV profile. My first AI program was a chat bot written over 18 years ago in Pascal with custom written assembler libraries in order to make my school mates believing that they are chatting on IRC with real people, while being on the computers without any internet connection, so other can play games on spare computers with the real network. This worked, for the first 15-30 minutes, later on they could find out that something was wrong or get bored. Second project was involved AI bots protecting IRC channels. I did some AI in games. Since then I am interested in practical applications of AI. This is my long term hobby and interest. Further projects required more sophisticated requirements. Currently I am working on integration AI with the financial algorithms and systems. I am good team player, so I am able to cooperate with other mods, I'm also available on daily basis (GMT/DST time). I hope we can improve this site by keeping it away from chaos, spam and trolls, to provide high quality site. • I edited the nomination to include the nominee's network-wide flair. 
– wythagoras Aug 17 '16 at 7:44 • I personally think Kenorb is the best one for this. But, the community's response made me scratch my head :( – Dawny33 Aug 22 '16 at 5:53 • I could guess that some science people didn't like the general approach of this AI site treating it as a competition hoping it'll fail again in favor of Stats.SE, CS.SE or DataScience.SE, so negativity was expected, therefore I didn't expect to have a lot of fans. I think every success require some conflicts, sacrifices and hard work, so I'm really happy that this site went beta after 6 years of constant failures. I hope this will serve humanity for the next 100+ years to keep up with so much changing world and sharing the advanced knowledge to everyone. – kenorb Aug 23 '16 at 0:39 • @kenorb I doubt it's because "science people didn't like you". More likely, I'm afraid to say, that the community doesn't think you're mod material. – ArtOfCode Aug 23 '16 at 22:38 • @ArtOfCode Thanks for your honest opinion:) Any idea about specifics? – kenorb Aug 23 '16 at 22:57 • @kenorb You had said: I am >7 year old SE user experienced across variety of fields and I am familiar with moderation tools and understanding their purpose – Mithical Aug 25 '16 at 13:18 • @Mithrandir True, my EN is still bad despite few Cambridge-FCE-like certificates (theory != practice), secondly usually I'm in hurry when writing and all the time distracted couple of times, so I had to re-read everything what I wrote couple of times, but even this doesn't help:) But not everybody is so pedantic to the English. So I don't think that's the reason, usually people just doesn't like me, since I tend to draw a lot of controversies whatever I do while pushing the things forward:) – kenorb Aug 25 '16 at 13:30 • What the hell is with all the downvotes?! – Zoltán Schmidt Sep 2 '16 at 18:12 • Why the downvotes o.O – ABcDexter Sep 3 '16 at 20:01 • @ABcDexter I honestly don't know. 
– Mithical Sep 3 '16 at 20:02 • Yes, if someone doesn't like the nomination, say it openly. One must be honest, just like ArtofCode explained... – ABcDexter Sep 3 '16 at 20:05 • @Mithrandir Actually i read everything, but am not sure about anyone. If mod pro tempore is chosen from a set of experts, it would be great ^_^ – ABcDexter Sep 3 '16 at 20:09 • I have downvoted this nomination because while kenorb seems to claim he has worked for a lot of years in the AI field, his questions and answers on this site appear to be based on very shaky ground and sometimes are borderline cranky. Moreover I have seen kenorb repeatedly use very dubious sources to justify his claims on other SE sites. His line of defence here is "Science people don't like me" which again sounds very cranky. – Fatalize Sep 5 '16 at 7:38 • @Fatalize This is how I work, questioning everything and seeing everything from the different point of view. My approach is to analyse concepts based on my logical conclusions given the available data (that is not by repeating what the mainstream or other says) and I see nothing wrong with it. This is how the real science work, building and organising knowledge, not assuming that a single body-of-knowledge has the ultimate truth. If we would believe in mainstream all the time, we'll still live on the flat Earth. Although the A.I. science is very simple, it's testable and it needs to work. – kenorb Sep 5 '16 at 10:15 • @kenorb Just visited this site for the first time. Net score was 0, so I upvoted you. – user1271772 Jul 18 at 17:43 ### Notes: While I'm not the most knowledgeable about AI, and don't have the highest reputation level, I know a lot about moderating. In the past three days, every single day, I've cleared all the review queues I have access to. I currently own two organizations, and I moderate, or lead, both of them. I have 2 pending proposals on Area51. I'm active on the Stack Exchange sites almost every single day. 
I have prior moderating experience as a former FPC on Scratch. It would be an honor to be a moderator on this site. Thank you for reading. • " I'm not the most knowledgeable about AI". How much do you know? – ABcDexter Sep 3 '16 at 20:14 • @ABcDexter About what? AI is a rather large subject. You could be talking about neural networks, the history/origin of AI, etc. – baranskistad Sep 3 '16 at 22:50
https://plainmath.net/93084/find-a-formula-for-the-exponential-funct
# Find a formula for the exponential function passing through the points (-2, 1) and (3, 32).

domino671v

Let the exponential function be $y=ab^{x}$.

At $(-2,1)$: $1=ab^{-2}$. At $(3,32)$: $32=ab^{3}$.

Now, $\frac{ab^{3}}{ab^{-2}}=\frac{32}{1}\phantom{\rule{0ex}{0ex}}⇒{b}^{5}=32={2}^{5}\phantom{\rule{0ex}{0ex}}⇒b=2\phantom{\rule{0ex}{0ex}}\therefore a=\frac{32}{{b}^{3}}=\frac{32}{8}=4$

Hence the exponential function is $y=4\cdot 2^{x}$.
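As a quick numerical check of this result (a Python sketch, not part of the original answer):

```python
# Check that y = a * b**x with a = 4, b = 2 passes through both given points.
a, b = 4, 2
assert a * b ** (-2) == 1   # the point (-2, 1)
assert a * b ** 3 == 32     # the point (3, 32)
print("y = 4 * 2**x fits both points")
```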
http://debasishg.blogspot.com/2007/11/infinite-streams-using-java-closures.html
## Friday, November 09, 2007

### Infinite Streams using Java Closures

Neal Gafter's prototype of the closures implementation in Java has given us enough playground to fool around with. Of late, I have been dabbling with a couple of idioms from functional programming, trying to implement them in Java. Many of them have already been tried using functors, anonymous inner classes and the like. Many of them work too, but at the cost of high accidental complexity. The following is an attempt at a clear implementation of infinite streams in Java.

Infinite streams give you the illusion that they can contain an infinite number of objects. The real kludge behind infinite streams is lazy evaluation. SICP introduces the term delayed evaluation, which enables us to represent very large (even infinite) sequences as streams. Functional languages like Haskell and Miranda employ laziness as the default paradigm of evaluation, while languages like Scheme implement the same concepts as library functions (delay and force). Dominik Gruntz implements infinite streams in Java using the functor paradigm and inner classes. The obvious problem is verbosity, resulting from the accidental complexity that they lend to the implementation. In this post, I attempt the same using Neal's closures prototype. So, without further ado ..

The Stream Interface

Here's the contract for lazy evaluation ..

class StreamTest {
  interface Stream<E> {
    E car();
    Stream<E> cdr();
    E get(int index);
    <R> Stream<R> map(Unary<? super E, R> f);
    Stream<E> filter(Unary<? super E, Boolean> f);
  }
  //..
}

and a lazy implementation using Java closures ..

class StreamTest {
  interface Stream<E> {
    //..
    as above
  }

  static class LazyStream<E> implements Stream<E> {
    private E car;
    private {=>Stream<E>} cdr;

    // constructor
    public LazyStream(E car, {=>Stream<E>} cdr) {
      this.car = car;
      this.cdr = cdr;
    }

    // accessors
    public E car() { return car; }
    public Stream<E> cdr() { return cdr.invoke(); }

    // access at position
    public E get(int index) {
      Stream<E> stream = this;
      while (index-- > 0) {
        stream = stream.cdr();
      }
      return stream.car();
    }

    // map over the stream
    public <R> Stream<R> map(Unary<? super E, R> f) {
      return cons(f.invoke(car), {=>cdr().map(f)});
    }

    // filter the stream
    public Stream<E> filter(Unary<? super E, Boolean> f) {
      if (car() != null) {
        if (f.invoke(car()) == true) {
          return cons(car(), {=>cdr().filter(f)});
        } else {
          return cdr().filter(f);
        }
      }
      return null;
    }

    // factory method cons
    public static <E> LazyStream<E> cons(E val, {=>Stream<E>} c) {
      return new LazyStream<E>(val, c);
    }
  }
}

A couple of points bugging me .. I had to make up the Unary class since the closure did not allow me to specify ? super E in the left hand side. Ricky has clarified with Neal that this is due to the fact that things on the left hand side of a closure automatically have ? super in their types. Hence a little noise ..

static class Unary<T,R> {
  private {T=>R} u;
  public Unary({T=>R} u) {
    this.u = u;
  }
  public R invoke(T arg) {
    return u.invoke(arg);
  }
}

and now some tests ..

class StreamTest {
  //.. all above stuff
  //.. and the tests

  // helper function generating sequence of natural numbers
  static LazyStream<Integer> integersFrom(final int start) {
    return LazyStream.cons(start, {=>integersFrom(start+1)});
  }

  // helper function for generating fibonacci sequence
  static LazyStream<Integer> fibonacci(final int a, final int b) {
    return LazyStream.cons(a, {=>fibonacci(b, a+b)});
  }

  public static void main(String[] args) {
    // natural numbers
    Stream<Integer> integers = integersFrom(0);
    Stream<Integer> s = integers;
    for (int i=0; i<20; i++) {
      System.out.print(s.car() + " ");
      s = s.cdr();
    }
    System.out.println("...");

    // a map example over the stream
    Stream<Integer> t = integers;
    Stream<Integer> u = t.map(new Unary<Integer, Integer>({Integer i=>i*i}));
    for (int i=0; i<20; i++) {
      System.out.print(u.car() + " ");
      u = u.cdr();
    }
    System.out.println("...");

    // a filter over the stream
    Stream<Integer> x = integers;
    Stream<Integer> y = x.filter(new Unary<Integer, Boolean>({Integer i=>i%2==0}));
    for (int i=0; i<20; i++) {
      System.out.print(y.car() + " ");
      y = y.cdr();
    }
    System.out.println("...");
  }
}

Closures in Java will surely bring in a new paradigm of programming among developers. The amount of excitement that the prototype has already generated is phenomenal. It'll be too bad if they do not appear in Java 7.

Update: Ricky Clarkson points out in the Comments that {? super E=>? extends R} is the same as {E=>R}. The covariance / contravariance stuff just blew my mind when I compiled the post. Hence the Unary class is not required. Just remove the class and substitute the closure in map() and filter(). e.g.

public <R> Stream<R> map({E=>R} f) {
  return cons(f.invoke(car), {=>cdr().map(f)});
}

Ricky Clarkson said...

I'm not sure that Unary is necessary, can you put all the code together so that I can play with it (one or multiple files is fine)? 
I've just been covering that sicp chapter, so this is interesting. Debasish said... @Ricky: Here is the stuff in a single file .. http://docs.google.com/Doc?id=drm7v5q_11gv4m4p .. I would also love to get rid of Unary. Ricky Clarkson said... And here's my 'answer': http://pastebin.com/f5e4fd1ab In short, because {A=>B} can be read as {? super A=>? extends B}, you don't need to add it yourself. All I did was delete Unary and replace it with straight {A=>B}. Perhaps I missed something, but the code compiles and runs fine. If I missed something, add a test case that fails and I'll try again. Debasish said... Silly me ! It just blew off me that {? super E=>? extends R} is the same as {E=>R}. Thanks for reminding me. I would have required the indirection in case I had an extends on the left hand side. I am not changing the post - just adding an Update on the changes. Thanks for the comment. Prashant Jalasutram said... Good post debasish. But can you please help me out most of my programs in closures won't run in windowsXP? I always get C:\closures\test\tools\javac\closures>java -Xbootclasspath/p:c:/closures/lib/javac.jar StreamTest Exception in thread "main" java.lang.NoClassDefFoundError: javax/lang/function/OO C:\closures\test\tools\javac\closures> I could manage only very few closure examples to run. Thanks Prashant jalasutram http://prashantjalasutram.blogspot.com/ Ricky Clarkson said... Prashant: What command are you using to compile? If you're not specifying the classpath on that command, what value does %CLASSPATH% have? When you compile, some classes are created. For me, a javax/ directory appears in the same directory my .class file appears in (assuming no package statement in the source). You'll need to make sure that the directory above javax/ is on the classpath. I think this is only a prototype issue, and that in a release the types will be generated by the VM as needed, much as array types are. Prashant Jalasutram said... 
Ricky, I cannot see any folders getting created when it compiles successfully. Command i am using: C:\closures\test\tools\javac\closures> javac -J-Xbootclasspath/p:c:/closures/lib/javac.jar -source 7 Demo.java and then i try to run but fail almost all the times like java -Xbootclasspath/p:c:/closures/lib/javac.jar Demo Thanks Prashant Prashant Jalasutram said... Ricky, And value of %Classpath% is C:\closures\test\tools\javac\closures>set classpath CLASSPATH=C:\Program Files\Java\jdk1.6.0\lib;.; C:\closures\test\tools\javac\closures> Thanks Prashant Prashant Jalasutram said... Ricky, Thanks a lot for your gr8 tip and yes it worked finally and i am very happy that i can try a lot of examples now. It worked when i added "-d ." which allowed as you suggested to create a new directory and added OO class. so finally my javac looks like javac -d . -J-Xbootclasspath/p:c:/closures/lib/javac.jar -source 7 *.java and running in XP does not change any thing. Debasish thanks a lot for allowing to act as mediator pattern between me and ricky to solve this :-) Thanks Prashant Jalasutram Debasish said... Hi Prashant - It is good to find that ur problems have been fixed. I just now logged in and found the trail from Prashant. Thanks Ricky for all the help. Closures indeed provide great power of abstractions. I will be extremely disappointed if we miss it out in Java 7. Cheers.
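The lazy-stream idea in the post is not Java-specific. For comparison, a minimal sketch of the same integersFrom / map / filter pipeline using Python generators, where the laziness comes for free:

```python
from itertools import islice

def integers_from(start):
    # lazy infinite stream of naturals, like integersFrom in the post
    n = start
    while True:
        yield n
        n += 1

squares = (i * i for i in integers_from(0))           # map
evens = (i for i in integers_from(0) if i % 2 == 0)   # filter

print(list(islice(squares, 5)))  # [0, 1, 4, 9, 16]
print(list(islice(evens, 5)))    # [0, 2, 4, 6, 8]
```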
https://dp.tdhopper.com/nonparametric-lda/
# Nonparametric Latent Dirichlet Allocation

In [1]:
%matplotlib inline
%precision 2

Out[1]:
u'%.2f'

## Nonparametric Latent Dirichlet Allocation

Latent Dirichlet Allocation is a generative model for topic modeling. Given a collection of documents, an LDA inference algorithm attempts to determine (in an unsupervised manner) the topics discussed in the documents. It makes the assumption that each document is generated by a probability model, and, when doing inference, we try to find the parameters that best fit the model (as well as unseen/latent variables generated by the model). If you are unfamiliar with LDA, Edwin Chen has a friendly introduction you should read.

Because LDA is a generative model, we can simulate the construction of documents by forward-sampling from the model. The generative algorithm is as follows (following Heinrich):

• for each topic $k\in [1,K]$ do
  • sample term distribution for topic $\overrightarrow \phi_k \sim \text{Dir}(\overrightarrow \beta)$
• for each document $m\in [1, M]$ do
  • sample topic distribution for document $\overrightarrow\theta_m\sim \text{Dir}(\overrightarrow\alpha)$
  • sample document length $N_m\sim\text{Pois}(\xi)$
  • for all words $n\in [1, N_m]$ in document $m$ do
    • sample topic index $z_{m,n}\sim\text{Mult}(\overrightarrow\theta_m)$
    • sample term for word $w_{m,n}\sim\text{Mult}(\overrightarrow\phi_{z_{m,n}})$

You can implement this with a little bit of code and start to simulate documents. In LDA, we assume each word in the document is generated by a two-step process:

1. Sample a topic from the topic distribution for the document.
2. Sample a word from the term distribution from the topic.

When we fit the LDA model to a given text corpus with an inference algorithm, our primary objective is to find the set of topic distributions $\underline \Theta$, term distributions $\underline \Phi$ that generated the documents, and latent topic indices $z_{m,n}$ for each word. 
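The generative steps above can also be sketched end to end with only the standard library. This is a standalone sketch, not the notebook's code: it uses normalized Gamma draws in place of the scipy Dirichlet sampler used below, and a fixed document length in place of the Poisson draw.

```python
import random

def sample_dirichlet(alphas):
    # Dirichlet draw via normalized Gamma samples
    draws = [random.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]

def sample_index(probs):
    # draw an index from a categorical distribution
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

vocab = ['see', 'spot', 'run']
K, M, doc_len = 2, 5, 5
phi = [sample_dirichlet([1.0] * len(vocab)) for _ in range(K)]  # per-topic term dists
docs = []
for m in range(M):
    theta = sample_dirichlet([1.0] * K)  # per-document topic dist
    # two-step word generation: sample a topic index, then a term from that topic
    docs.append([vocab[sample_index(phi[sample_index(theta)])] for _ in range(doc_len)])
print(docs)
```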
To run the generative model, we need to specify each of these parameters:

In [2]:
vocabulary = ['see', 'spot', 'run']
num_terms = len(vocabulary)
num_topics = 2  # K
num_documents = 5  # M
mean_document_length = 5  # xi
term_dirichlet_parameter = 1  # beta
topic_dirichlet_parameter = 1  # alpha

The term distribution vector $\underline\Phi$ is a collection of samples from a Dirichlet distribution. This describes how our 3 terms are distributed across each of the two topics.

In [3]:
from scipy.stats import dirichlet, poisson
from numpy import round
from collections import defaultdict
from random import choice as stl_choice

In [4]:
term_dirichlet_vector = num_terms * [term_dirichlet_parameter]
term_distributions = dirichlet(term_dirichlet_vector, 2).rvs(size=num_topics)
print(term_distributions)

[[ 0.41  0.02  0.57]
 [ 0.38  0.36  0.26]]

Each document corresponds to a categorical distribution across this distribution of topics (in this case, a 2-dimensional categorical distribution). This categorical distribution is a distribution of distributions; we could look at it as a Dirichlet process! The base distribution of our Dirichlet process is a uniform distribution of topics (remember, topics are term distributions).

In [5]:
base_distribution = lambda: stl_choice(term_distributions)
# A sample from base_distribution is a distribution over terms
# Each of our two topics has equal probability
from collections import Counter
for topic, count in Counter([tuple(base_distribution()) for _ in range(10000)]).most_common():
    print("count:", count, "topic:", [round(prob, 2) for prob in topic])

count: 5066 topic: [0.40999999999999998, 0.02, 0.56999999999999995]
count: 4934 topic: [0.38, 0.35999999999999999, 0.26000000000000001]

Recall that a sample from a Dirichlet process is a distribution that approximates (but varies from) the base distribution. 
In this case, a sample from the Dirichlet process will be a distribution over topics that varies from the uniform distribution we provided as a base. If we use the stick-breaking metaphor, we are effectively breaking a stick one time and the size of each portion corresponds to the proportion of a topic in the document.

To construct a sample from the DP, we need to again define our DP class:

In [6]:
from scipy.stats import beta
from numpy.random import choice

class DirichletProcessSample():
    def __init__(self, base_measure, alpha):
        self.base_measure = base_measure
        self.alpha = alpha
        self.cache = []
        self.weights = []
        self.total_stick_used = 0.

    def __call__(self):
        remaining = 1.0 - self.total_stick_used
        i = DirichletProcessSample.roll_die(self.weights + [remaining])
        if i is not None and i < len(self.weights):
            return self.cache[i]
        else:
            stick_piece = beta(1, self.alpha).rvs() * remaining
            self.total_stick_used += stick_piece
            self.weights.append(stick_piece)
            new_value = self.base_measure()
            self.cache.append(new_value)
            return new_value

    @staticmethod
    def roll_die(weights):
        if weights:
            return choice(range(len(weights)), p=weights)
        else:
            return None

For each document, we will draw a topic distribution from the Dirichlet process:

In [7]:
topic_distribution = DirichletProcessSample(base_measure=base_distribution, alpha=topic_dirichlet_parameter)

A sample from this topic distribution is a distribution over terms. However, unlike our base distribution which returns each term distribution with equal probability, the topics will be unevenly weighted. 
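The stick-breaking step inside `__call__` can be checked in isolation. A pure-stdlib sketch (with `random.betavariate` standing in for `scipy.stats.beta`): each break takes a Beta(1, alpha) fraction of the remaining stick, so the pieces are all positive and their sum approaches, but never reaches, 1.

```python
import random

def stick_breaking_weights(alpha, n):
    # break a unit stick n times; piece k gets a Beta(1, alpha) fraction of what remains
    remaining, weights = 1.0, []
    for _ in range(n):
        piece = random.betavariate(1, alpha) * remaining
        weights.append(piece)
        remaining -= piece
    return weights

weights = stick_breaking_weights(alpha=1.0, n=50)
assert all(w > 0 for w in weights)
assert 0.0 < sum(weights) < 1.0  # some stick always remains unbroken
print(sum(weights))
```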
In [8]:
for topic, count in Counter([tuple(topic_distribution()) for _ in range(10000)]).most_common():
    print("count:", count, "topic:", [round(prob, 2) for prob in topic])

count: 9589 topic: [0.38, 0.35999999999999999, 0.26000000000000001]
count: 411 topic: [0.40999999999999998, 0.02, 0.56999999999999995]

To generate each word in the document, we draw a sample topic from the topic distribution, and then a term from the term distribution (topic).

In [9]:
topic_index = defaultdict(list)
documents = defaultdict(list)

for doc in range(num_documents):
    topic_distribution_rvs = DirichletProcessSample(base_measure=base_distribution,
                                                    alpha=topic_dirichlet_parameter)
    document_length = poisson(mean_document_length).rvs()
    for word in range(document_length):
        topic_distribution = topic_distribution_rvs()
        topic_index[doc].append(tuple(topic_distribution))
        documents[doc].append(choice(vocabulary, p=topic_distribution))

Here are the documents we generated:

In [10]:
for doc in documents.values():
    print(doc)

['see', 'run', 'see', 'spot', 'see', 'spot']
['see', 'run', 'see']
['see', 'run', 'see', 'see', 'run', 'spot', 'spot']
['run', 'run', 'run', 'spot', 'run']
['run', 'run', 'see', 'spot', 'run', 'run']

We can see how each topic (term-distribution) is distributed across the documents:

In [11]:
for i, doc in enumerate(Counter(term_dist).most_common() for term_dist in topic_index.values()):
    print("Doc:", i)
    for topic, count in doc:
        print(5*" ", "count:", count, "topic:", [round(prob, 2) for prob in topic])

Doc: 0
      count: 6 topic: [0.38, 0.35999999999999999, 0.26000000000000001]
Doc: 1
      count: 3 topic: [0.40999999999999998, 0.02, 0.56999999999999995]
Doc: 2
      count: 5 topic: [0.40999999999999998, 0.02, 0.56999999999999995]
      count: 2 topic: [0.38, 0.35999999999999999, 0.26000000000000001]
Doc: 3
      count: 5 topic: [0.38, 0.35999999999999999, 0.26000000000000001]
Doc: 4
      count: 5 topic: [0.40999999999999998, 0.02, 0.56999999999999995]
      count: 1 topic: [0.38, 0.35999999999999999, 
0.26000000000000001] To recap: for each document we draw a sample from a Dirichlet Process. The base distribution for the Dirichlet process is a categorical distribution over term distributions; we can think of the base distribution as an $n$-sided die where $n$ is the number of topics and each side of the die is a distribution over terms for that topic. By sampling from the Dirichlet process, we are effectively reweighting the sides of the die (changing the distribution of the topics). For each word in the document, we draw a sample (a term distribution) from the distribution (over term distributions) sampled from the Dirichlet process (with a distribution over term distributions as its base measure). Each term distribution uniquely identifies the topic for the word. We can sample from this term distribution to get the word. Given this formulation, we might ask if we can roll an infinite sided die to draw from an unbounded number of topics (term distributions). We can do exactly this with a Hierarchical Dirichlet process. Instead of the base distribution of our Dirichlet process being a finite distribution over topics (term distributions) we will instead make it an infinite Distribution over topics (term distributions) by using yet another Dirichlet process! This base Dirichlet process will have as its base distribution a Dirichlet distribution over terms. We will again draw a sample from a Dirichlet Process for each document. The base distribution for the Dirichlet process is itself a Dirichlet process whose base distribution is a Dirichlet distribution over terms. (Try saying that 5-times fast.) We can think of this as a countably infinite die each side of the die is a distribution over terms for that topic. The sample we draw is a topic (distribution over terms). 
For each word in the document, we will draw a sample (a term distribution) from the distribution (over term distributions) sampled from the Dirichlet process (with a distribution over term distributions as its base measure). Each term distribution uniquely identifies the topic for the word. We can sample from this term distribution to get the word.

These last few paragraphs are confusing! Let's illustrate with code.

In [12]:

```python
term_dirichlet_vector = num_terms * [term_dirichlet_parameter]
base_distribution = lambda: dirichlet(term_dirichlet_vector).rvs(size=1)[0]

base_dp_parameter = 10
base_dp = DirichletProcessSample(base_distribution, alpha=base_dp_parameter)
```

This sample from the base Dirichlet process is our infinite-sided die. It is a probability distribution over a countably infinite number of topics. The fact that our die is countably infinite is important. The sampler base_distribution draws topics (term distributions) from an uncountable set. If we used this as the base distribution of the Dirichlet process below, each document would be constructed from a completely unique set of topics. By feeding base_distribution into a Dirichlet process (the stochastic memoizer), we allow the topics to be shared across documents.

In other words, base_distribution will never return the same topic twice; however, every topic sampled from base_dp would be sampled an infinite number of times (if we sampled from base_dp forever). At the same time, base_dp will also return an infinite number of topics. In our formulation of the LDA sampler above, our base distribution only ever returned a finite number of topics (num_topics); there is no num_topics parameter here.
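The claim that base_distribution never repeats while a DP sample reuses its draws can be checked with a stripped-down, stdlib-only analogue of the DirichletProcessSample class. The name `stick_breaking_dp`, the seeded generator, and the use of `random.betavariate` in place of scipy's beta sampler are all my own illustrative choices, not part of the post:

```python
import random

rng = random.Random(42)

def stick_breaking_dp(base_measure, alpha):
    """A minimal stochastic memoizer: stick-breaking over a base measure."""
    atoms, weights = [], []
    def sample():
        u, acc = rng.random(), 0.0
        for atom, w in zip(atoms, weights):
            acc += w
            if u < acc:          # landed on an existing stick piece: reuse its atom
                return atom
        remaining = 1.0 - sum(weights)
        weights.append(rng.betavariate(1, alpha) * remaining)  # break a new piece
        atoms.append(base_measure())                           # memoize a fresh draw
        return atoms[-1]
    return sample

base_measure = rng.random   # continuous base measure: repeat draws have probability zero
dp = stick_breaking_dp(base_measure, alpha=2)

raw_draws = [base_measure() for _ in range(1000)]
dp_draws = [dp() for _ in range(1000)]
print(len(set(raw_draws)))   # 1000: the raw base measure never repeats a value
print(len(set(dp_draws)))    # a small number: the DP keeps reusing its atoms
```

With a small `alpha`, almost all of the 1000 DP draws collapse onto a handful of memoized atoms, which is exactly the sharing behavior the post relies on.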
Given this setup, we can generate documents from the hierarchical Dirichlet process with an algorithm that is essentially identical to that of the original latent Dirichlet allocation generative sampler:

In [13]:

```python
nested_dp_parameter = 10

topic_index = defaultdict(list)
documents = defaultdict(list)

for doc in range(num_documents):
    topic_distribution_rvs = DirichletProcessSample(base_measure=base_dp,
                                                    alpha=nested_dp_parameter)
    document_length = poisson(mean_document_length).rvs()
    for word in range(document_length):
        topic_distribution = topic_distribution_rvs()
        topic_index[doc].append(tuple(topic_distribution))
        documents[doc].append(choice(vocabulary, p=topic_distribution))
```

Here are the documents we generated:

In [14]:

```python
for doc in documents.values():
    print(doc)
```

```
['spot', 'spot', 'spot', 'spot', 'run']
['spot', 'spot', 'see', 'spot']
['spot', 'spot', 'spot', 'see', 'spot', 'spot', 'spot']
['run', 'run', 'spot', 'spot', 'spot', 'spot', 'spot', 'spot']
['see', 'run', 'see', 'run', 'run', 'run']
```

And here are the latent topics used:

In [15]:

```python
for i, doc in enumerate(Counter(term_dist).most_common() for term_dist in topic_index.values()):
    print("Doc:", i)
    for topic, count in doc:
        print(5*" ", "count:", count, "topic:", [round(prob, 2) for prob in topic])
```

```
Doc: 0
      count: 2 topic: [0.17999999999999999, 0.79000000000000004, 0.02]
      count: 1 topic: [0.23000000000000001, 0.58999999999999997, 0.17999999999999999]
      count: 1 topic: [0.089999999999999997, 0.54000000000000004, 0.35999999999999999]
      count: 1 topic: [0.22, 0.40000000000000002, 0.38]
Doc: 1
      count: 2 topic: [0.23000000000000001, 0.58999999999999997, 0.17999999999999999]
      count: 1 topic: [0.17999999999999999, 0.79000000000000004, 0.02]
      count: 1 topic: [0.35999999999999999, 0.55000000000000004, 0.089999999999999997]
Doc: 2
      count: 4 topic: [0.11, 0.65000000000000002, 0.23999999999999999]
      count: 2 topic: [0.070000000000000007, 0.65000000000000002, 0.27000000000000002]
      count: 1 topic: [0.28999999999999998, 0.65000000000000002, 0.070000000000000007]
Doc: 3
      count: 2 topic: [0.17999999999999999, 0.79000000000000004, 0.02]
      count: 2 topic: [0.25, 0.55000000000000004, 0.20000000000000001]
      count: 2 topic: [0.28999999999999998, 0.65000000000000002, 0.070000000000000007]
      count: 1 topic: [0.23000000000000001, 0.58999999999999997, 0.17999999999999999]
      count: 1 topic: [0.089999999999999997, 0.54000000000000004, 0.35999999999999999]
Doc: 4
      count: 3 topic: [0.40000000000000002, 0.23000000000000001, 0.37]
      count: 2 topic: [0.42999999999999999, 0.17999999999999999, 0.40000000000000002]
      count: 1 topic: [0.23000000000000001, 0.29999999999999999, 0.46000000000000002]
```

Our documents were generated by an unspecified number of topics, and yet the topics were shared across the 5 documents. This is the power of the hierarchical Dirichlet process!

This non-parametric formulation of latent Dirichlet allocation was first published by Yee Whye Teh et al. Unfortunately, forward sampling is the easy part. Fitting the model on data requires complex MCMC or variational inference. There are a limited number of implementations of HDP-LDA available, and none of them are great.
https://try.entitylinq.com/docs/SqlServerTutorial/Update.md
# SQL Server UPDATE

EF does thorough work tracking entity state. In cases where the fact of a change is not clear, it's usually better to let EF manage the update. ELINQ (pure SQL) is preferred when we don't want to retrieve the entity or when a bulk update is needed.

### 1) Update a single column in all rows

```csharp
var rows = DbContext.Database.Query((Taxes taxes) =>
    UPDATE(taxes).SET(() => taxes.UpdatedAt = GETDATE()));
Console.WriteLine($"{rows} rows affected");
```

### 2) Update multiple columns

```csharp
var one = 0.01M;
var two = 0.02M;
var rows = DbContext.Database.Query((Taxes taxes) =>
{
    UPDATE(taxes)
        .SET(() =>
        {
            taxes.MaxLocalTaxRate += two;
            taxes.AvgLocalTaxRate += one;
        });
    WHERE(taxes.MaxLocalTaxRate == one);
});
Console.WriteLine($"{rows} rows affected");
```
https://ask.libreoffice.org/en/question/48510/is-it-possible-to-change-fontsstyles-and-spellcheck-memo-fields-in-libreoffice-base/
# Is it possible to change fonts/styles and spellcheck memo fields in LibreOffice Base?

I am a newbie when it comes to LO, and am using an Apple Mac with OS X 10.7.5 and FileMaker Pro 10 database software. I've looked in this forum, but I can't find anything concerning this topic.

I've been using FileMaker Pro 10 to store essays and articles as memo fields in a database, sorted on a date field. In FileMaker I can change styles in the database forms, such as using italics, bold, and changing fonts and their colours. Even though a form is set to a default font, I can cut and paste articles whose styles differ from the form's defaults, or edit them in situ, and FileMaker doesn't complain. I can also spell check the memo fields. However, I would like to escape from FileMaker's expensive clutches (the FM Pro 13 version is nearly £200).

However, I can't do these things with Base memo fields (I'm using LO Version 4.3.4.1). Using the forms, the spellchecking icons are greyed out, though I can spellcheck in Writer with no problems (so the UK dictionary is working). I also can't change the font styles; you seem to be stuck with what you initially create. I am not a programmer, but I have had a look at the form's control and form attributes, and I can't see a way of overriding the form's defaults. Perhaps it's not possible to do this with Base (at least not without a lot of work). Any help and ideas would be appreciated.

Can you confirm: are you using a grid or a normal form? If the latter, have you opened it in edit mode, selected the text control alone (Ctrl-click, and confirm what kind of control it is), opened the Properties dialog, and in the General tab clicked on the ... in the dialog box to try to accomplish what you want that way?
https://datascience.stackexchange.com/questions/19630/convnet-training-error-does-not-decrease
# Convnet training error does not decrease

I'm training a convolutional neural net to drive a toy car, and no matter what I do the training accuracy does not increase beyond 30-35%, which is where it starts when the convnet is randomly initialized. What's strange is that a much simpler model, a neural network with a single hidden layer and no convolution, does substantially better, consistently getting accuracy of 65-75%. I've been working on this project for over a year and feel like I've tried everything to make the convnet better. What am I doing wrong?

Notes:

• Dataset contains 150,000 records, but since my video feed generates 20 frames per second and since frames don't change much from second to second, it's more like 150,000 / 20 = 7,500 unique records.
• Three equally proportioned classes: turn left, go straight, turn right.
• Both the convnet and simple NN use TensorFlow's AdamOptimizer. The simple net does well with learning rates of 1e-5 and 1e-4, but the convnet doesn't do well with any value; I've tried 1e-2 all the way through 1e-6.
• The simple NN uses the sigmoid activation function; the convnet is all relu activations.
• Both models read the same data. I have a single sampling and data augmentation class that's shared among all my models, so my data isn't bad, because the simple net works fine on the same inputs.
• My convnet can't even overfit on the training data, so it's not data size that's the issue, in my opinion.
• Overfitting doesn't seem to be a problem with either model: training and validation sets tend to have similar performance, give or take 5%.
• I'm using `initial = tf.truncated_normal(shape, stddev=0.1)` to initialize all weights.
• I'm using `initial = tf.constant(0.1, shape=shape)` to initialize all biases.
• It could be specifically the convolution that's the problem, since a shallow single-hidden-layer convnet did poorly, but a two-hidden-layer fully-connected neural net with no convolution gets around 65% accuracy.

Convnet notes (all yield the same poor results):

• Batch normalization
• Max pooling
• 50% drop-out probability
• Various stride sizes
• Various depths of layers: I've tried 2-5 convnet layers and 2-4 fully connected layers. I've had as few as 3 layers and as many as 9. Convnet layers have between 32 and 64 neurons; fully connected layers have between 32 and 512 neurons.
• 3D convolution (took too much memory and caused my GPU to crash)
• 1x1 convolutions
• The one thing I haven't tried is transfer learning, and I hope to only use that as a last resort since the simple net works fine.

Code:

• What happens if you keep training on one instance? Will it overfit on this sample? Jun 12, 2017 at 8:58
• Don't compute cross entropy that way: github.com/tensorflow/tensorflow/issues/2462 Jun 14, 2017 at 13:36
• "DO NOT use this code for training. Use tf.nn.softmax_cross_entropy_with_logits() instead." Jun 14, 2017 at 13:37
• @JanvanderVegt The training accuracy doesn't improve on the same batch of 50 images, so something is clearly wrong. The single-layer neural network is able to overfit when fed the same single batch of data, though. Jun 17, 2017 at 4:31
• @kbrose Your suggestion did the trick! I tried it out and my convnet is now comparable to my fully-connected net. If you write up your answer I'll accept it. Jun 17, 2017 at 23:41

In your convnet code, you compute the cross entropy manually:

```python
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
```

If you do that, you may run into numerical stability issues. Instead, you should use `tf.nn.softmax_cross_entropy_with_logits()`.

If using tensorflow version 1.8 or later, use `tf.nn.softmax_cross_entropy_with_logits_v2()` and weep at the state of tensorflow's API.
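The instability behind this advice is easy to reproduce without TensorFlow. Below is a plain-Python sketch (the function names are mine, purely illustrative) of why taking the log of a naively computed softmax overflows for large logits, while the fused log-sum-exp form that `softmax_cross_entropy_with_logits` uses internally stays finite:

```python
import math

def naive_log_softmax(logits, i):
    # exp() overflows for large logits, so this blows up long before
    # the division or the log gets a chance to run.
    exps = [math.exp(x) for x in logits]
    return math.log(exps[i] / sum(exps))

def stable_log_softmax(logits, i):
    # log softmax_i = x_i - (m + log sum_j exp(x_j - m)), with m = max(x).
    # Shifting by the max keeps every exponent <= 0, so nothing overflows.
    m = max(logits)
    return logits[i] - (m + math.log(sum(math.exp(x - m) for x in logits)))

logits = [1000.0, 0.0, -1000.0]
try:
    print(naive_log_softmax(logits, 0))
except OverflowError as e:
    print("naive version overflows:", e)
print(stable_log_softmax(logits, 0))
```

The stable version returns a value near 0.0 here (the first class absorbs essentially all the probability mass), while the naive version raises an `OverflowError`.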